SLA & LANGUAGE TESTING / ASSESSMENT

Papers and chapters reviewed:
 The relationship between language testing and second language acquisition, revisited (Elana Shohamy, paper)
 Second language interaction: current perspectives and future trends (Micheline Chalhoub-Deville, University of Iowa, paper)
 From Interfaces Between Second Language Acquisition and Language Testing Research:
 Chapter 6: Testing methods in context-based second language research (Dan Douglas)
 Chapter 4: Strategies and processes in test taking and SLA (Andrew D. Cohen)

Instructor: Dr. Glamreza Abbasian
Provider: Kobra (Minoo) Tajahmadi, PhD Student, Tehran University South Branch
Student Number: 9675210044
The paper examines the relationship between and the relevance of second language
acquisition (SLA) and language testing (LT).
Based on three dimensions of potential contributions of LT to SLA
 (1) defining the construct of language ability;
 (2) applying LT findings to test SLA hypotheses; and
 (3) providing SLA researchers with quality criteria for tests and tasks
 and three dimensions of potential contribution of SLA to LT
 (1) identifying language components for elicitation and criteria for assessment;
 (2) proposing tasks for assessing language; and
 (3) informing language testers about differences and accommodating these differences,
 this paper examines the interfaces of the two fields based on articles published in recent issues of the journals "Language Testing" and "Studies in Second Language Acquisition".
Method: 3.1. Research questions & 3.2. Data collection procedures
The research questions posed in this paper refer to two dimensions: 1. the interfaces between SLA and
LT 2. the relevance of LT to SLA as perceived by SLA researchers.
3.2.1. Interfaces
The extent to which the two fields contribute to one another was examined through a set of published articles in SLA and LT journals. Three issues of "Studies in Second Language Acquisition" (SSLA, Vol. 21, issue Nos. 1, 3, and 4) and three issues of "Language Testing" (Vol. 16, issue Nos. 1, 2, and 3) were selected as sources of data because they represent the two fields, respectively. These articles were analyzed according to the six areas of potential contribution mentioned above.
3.2.2. Relevance
The relevance of LT to SLA was examined through written interviews with leading researchers who have familiarity and experience with LT. An attempt was made to select researchers from diverse areas in SLA who are knowledgeable about the field of LT. The researchers come from the areas of applied linguistics, SLA and LT; they are all of high stature in the field and have national and/or international reputations. Each of the respondents was asked to express his/her perceptions of the relevance of LT to their area of expertise in response to a written letter sent by electronic mail. Ten people were contacted; eight responded. Information regarding their backgrounds and areas of research in SLA is included.
Results: 4.1. Interfaces 4.1.1. Contribution of LT to SLA research
The analysis is based on 10 articles appearing in the three issues of SSLA (Vol. 21, 1999: Williams, 1999; Myles et al., 1999; Moyer, 1999, and Ortega, 1999 in issue No. 1; Pienemann and Hakansson, 1999; Izumi et al., 1999, and Saito, 1999 in issue No. 3; and Rosa and O'Neill, 1999; Mackey, 1999, and Rott, 1999 in issue No. 4).
 The following patterns were observed regarding the contribution of LT to SLA.
 In terms of defining constructs of language ability identified in LT, none of the SLA articles focused on any issue of LT, and none of the articles applied findings obtained in LT to their research. Yet there were often references to problems and issues relating to testing, especially with regard to elicitation procedures.
 In terms of applying LT findings to SLA research, none of the articles made any reference to research in LT, not even, as noted above, within the context of discussions about elicitation procedures. It almost seems as if research on elicitation procedures is not viewed as part of the LT discipline.
 In terms of applying quality criteria (i.e., reliability and validity) developed in LT to the various data-collection tools used in the SLA research, there is very little evidence of such application. While all the SLA research articles use a variety of data-collection procedures, often very innovative, there is rarely any use, report, or application of the psychometric techniques developed in LT for examining the quality of these SLA tools. Thus, most of the articles do not provide any information about the reliability, validity, or item characteristics of the tests and tasks.
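As an illustration of the kind of psychometric check referred to here (my addition, not part of the paper's analysis), the sketch below computes Cronbach's alpha, a standard internal-consistency estimate, for a small hypothetical matrix of item scores. The data and the function are illustrative only; an SLA researcher could report such a figure for any multi-item elicitation instrument.

import numpy as np

def cronbach_alpha(scores):
    # scores: respondents-by-items matrix of item scores
    n_items = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)

# Hypothetical data: 5 learners x 4 items, each scored 0-3
responses = np.array([
    [2, 3, 2, 3],
    [1, 1, 0, 1],
    [3, 3, 2, 3],
    [0, 1, 1, 0],
    [2, 2, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")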
4.1.2. Contribution of SLA to LT
This analysis is based on the articles appearing in three issues of "Language Testing" (Vol. 16, 1999: Freedle and Kostin, 1999; Laufer and Nation, 1999, and Papajohn, 1999 in issue No. 1; Beglar and Hunt, 1999; Kormos, 1999; Schmitt, 1999, and Brown, 1999 in issue No. 2; and Marquardt and Gillam, 1999; Holm et al., 1999; Windsor et al., 1999, and Cardell and Chenery, 1999 in issue No. 3).
The following patterns were observed regarding the contribution of SLA to LT.
 In terms of identifying components from SLA for the elicitation procedures, most articles in LT are based on phenomena identified in SLA research, such as vocabulary acquisition or oral performance, but applied to testing situations.
 In terms of identifying tasks for assessing language, there is no evidence that SLA has any impact on the methods used for assessment, as all LT articles employ tests exclusively and no other methods of language elicitation.
 In terms of accommodating differences found in SLA in designing tests, there is hardly any evidence of this, as most studies refer to the learners as homogeneous groups. The only article that addresses the issue is Brown (1999), which discusses the relative importance of individual test takers' language background to test performance.
4.2. Relevance:
The following views were expressed by researchers in SLA regarding the relevance of LT.
Interviewee 1: In theory LT is very relevant, but not in practice; there is much more that could be done. There is not enough assessment of interlanguage and pragmatics, and LT holds a narrow view of language. Language tests do not address the complexity and the dynamics of language in a non-linear way.
Interviewee 2: LT determines in advance what people should know, yet this is not what test takers really do. Language testers are not paying enough attention to the learner and his/her attitudes towards tests.
Interviewee 3: LT is based on old views of SLA and does not relate to practical needs. It does not relate to language processing and holds on to the black-box approach, pretending that there is no idea of what is going on within that machine. There is also a lack of attention to current needs in Europe.
Interviewee 4: LT is very relevant, especially in the political and curricular context, and has a pivotal power position. Yet the profession tends to take whatever assessment or testing practices prevail as the 'natural' situation, and these practices affect absolutely everything we do: research foci, research design, the reporting and analysis of results, potential implications for curriculum development, teaching practice, materials development, and, last but certainly not least, major educational and social policy decisions. The influence/relevance of assessment is more crucial now than ever, since things do not change unless some key assumptions and practices in assessment/testing change and become institutionalized. There is a need to devise more sophisticated assessment practices and to depart from what has been the customary playground, conceptual apparatus, methodology, and power position of the 'psychometric folks'.
Interviewee 5: SLA researchers and LT researchers work in parallel to solve problems of learning second languages. SLA researchers should be better informed about work in LT, as many breakthroughs in language testing theory are centrally relevant to SLA.
Interviewee 6: LT is not sensitive to teaching and learning and to the educational context. SLA researchers have inadvertently violated certain basic principles of sound testing. By the same token, language testers have sometimes been culpable of preparing tests that violate basic SLA principles (such as the concern for contextual variables) that would have made the tests more valid.
Interviewee 7: LT is not sensitive to the needs of individual instructors and to the realities of running language programs. The emphasis on statistics by testing communities stifles creativity in test development.
Interviewee 8: LT is not relevant as long as it is testing and not assessment. There is a need to work for relevance.
Without locating and understanding language testing within a broad spectrum of assessment situations, those
concerned with the work of language testing are in danger of, on the one hand, attempting to colonise areas that are
inappropriate (by expanding the use of tests and testing procedures to contexts where other forms of assessment may
be more useful for teachers, learners, parents and administrators), and, on the other, ignoring their potential
contribution to other (often highly complex) arenas in which language assessment is taking place (by failing to alert
naive practitioners of language assessment to the technical dimensions of what they are doing or propose to do).
Conclusions & interpretations
 The evidence points to limited interfaces between SLA and LT, and by and large to limited relevance of LT to SLA. LT is not being used by SLA researchers, neither in terms of constructs, procedures, and application of findings, nor in terms of applying criteria such as reliability and validity for developing quality data-collection instruments.
 From the analysis of the LT research it is evident that language testers do refer to and consider topics and issues of SLA in test development and use, yet the research is still restricted to tests as the main data-collection tool, while other procedures used by SLA researchers for data collection are not used by language testers.
 Further, in most cases, language testers do not consider variations among learners.
 There is also the view that LT is not relevant because it does not address how learners process language: language tests determine in advance what will be tested, based on what testers believe learners do, while learners process language differently.
 Language testers need to pay more attention to the learners, to the test takers who take the tests and to the needs of
institutions and groups where tests are needed.
 Today as well, there is limited interaction and relevance between LT and SLA. It is difficult to explain why such a situation exists and why two disciplines that have common goals operate in such isolated ways, do not cooperate, and do not build on the knowledge gained in each field.
Recommendations & implications
 Broaden and update the scope of assessment, constructing theories of LT on current views of SLA and assessing more components of language, such as interlanguage variation, language processing, pragmatics, and aspects that address the context, complexity and dynamics of language.
 Approaches to LT need to be determined not only by theories of language abilities but mostly by what learners actually do and how they process language.
 Involve the test takers in decisions about the type of assessment procedures they feel comfortable with.
 Language testers need to market their findings and make SLA researchers aware of and informed about the work of language testers.
 Language testers need to be aware of the phenomenon whereby research in LT is viewed as the province of a limited group of people who deal with psychometric issues and statistics, which makes it difficult for others to gain insight into LT knowledge that may be very relevant.
 Demonstrate to SLA researchers where and how research in LT can be useful and relevant.
 Language testers need to be more attuned to the educational context where language tests are most often used, as part of teaching and learning and the realities of language programs.
 Language testers need to broaden their view from testing to assessment: away from exclusively psychometric and statistical approaches, and from tests alone towards different types of assessment procedures.
 Finally, language testers need to work for relevance.
Second language interaction: current perspectives and future trends
Since the late 1960s, the language testing field has earnestly occupied itself with the nature of the L2
construct. Kunnan (1998) states that findings advanced by most researchers support the claim that the L2 construct is multicomponential, and indicates that research so far has not been able to identify the specific components underlying this multidimensional construct and how they interact.
Douglas (2000: 25) makes a similar point: ‘[a]s has become clear in recent years through empirical studies conducted by language testers and others, language knowledge is multicomponential; however, what is extremely unclear is precisely what those components may be and how they interact in actual language use’.
In short, while theoretical arguments and empirical evidence have ascertained the
multicomponentiality of the L2 construct, consensus is absent regarding the nature of these
components and the manner in which they interact.
II Interaction in L2 use
Given the lack of consensus on the specific components that underlie the L2 construct and how these components
interact in language use, it is not surprising that different models exist (see accounts of these models in McNamara,
1996; Chalhoub-Deville, 1997; Chapelle, 1998; Chalhoub-Deville and Deville, in press).
 According to Alderson and Banerjee (2002), however, the model that represents the state of the art in the field is the
communicative language ability (CLA) model (Bachman, 1990; Bachman and Palmer, 1996). Moreover,
CLA has recently found recognition in the general measurement literature (Bachman, 2002a). Consequently,
discussion of the nature of interaction in L2 use focuses mainly on the CLA representation.
 In discussing interactional communicative ability, Bachman (1990: 302) cites the work of Kramsch (1986) and states that the ‘ “interactional” aspect of the approach is rooted in the same views of language and language use that have informed communicative language teaching, an aspect that Kramsch (1986) has referred to as “interactional competence”’.
In defining and explicating interactional competence, Kramsch (1986: 367) writes that successful interaction presupposes not only a shared external context of communication, or shared knowledge of the world, but also the construction of a shared internal context or ‘sphere of inter-subjectivity’ that is built through the collaborative effort of the interactional partners.
 Bachman focuses on the abilities the individual possesses and one surmises that the context is important to
the extent that it helps draw out those intended abilities.
 Bachman emphasizes both a construct-based and a task-based approach to test design, which gives tasks (or contexts)
equal prominence in test design and interpretation. Nonetheless, Bachman still posits a distinction between the abilities
targeted and the context in which they are observed.
 This position diverges from that advocated by proponents of interactional competence, who view the language
use situation primarily as a social event in which ability, language users, and context are inextricably meshed.
 In considering the interactional competence perspective, the following representation should be considered:
‘ability – in language user – in context’. This representation claims that the ability components the language user brings to
the situation or context interact with situational facets to change those facets as well as to be changed by them.
The facet aspects of the context the language user attends to dynamically influence the ability
features activated and vice versa. This perspective, in essence, maintains that ability and
context features are intricately connected and it is difficult or impossible to disentangle them.
 Young (2000) maintains that because both ability models and interactional models consider
a learner’s abilities as well as contextual features, the differences between the two might
appear trivial. He argues, however, that these models represent fundamentally different
perspectives on language use.
 Young (2000: 9) disagrees with the notion that language use is the manifestation of ‘a trait
or bundle of traits’ that can be assessed in an individual. He also does not support the
notion of a distinction between ability and performance.
In conclusion
 In its current cognitive formulation, CLA portrays a link between ‘ability – in language user’ and ‘the
context’, underscoring the two as important but separate entities.
 The social perspective of interaction, however, rejects that postulation in favor of one that positions
‘ability – in language user – in context’.
The social interactional perspective compels language testers to address two fundamental challenges:
1. amending the construct of individual ability to accommodate the notion that language use in a communicative event reflects dynamic discourse, which is co-constructed among participants; and
2. reconciling the notion that language ability is local with the need for assessments to yield scores that generalize across contextual boundaries.
III Social-cognitive construct representation
The interactional, co-constructionist perspective is grounded in anthropology, sociology, and
developmental psychology. It is inspired by the work of the Russian developmental psychologist Lev
Vygotsky (1896-1934; Vygotsky, 1978) and his followers. Sfard (1998) has characterized the debate among
learning researchers in terms of two metaphors:
 the acquisition metaphor (AM) that views the learner as developing “basic units of knowledge that
can be accumulated, gradually refined, and combined to form ever richer cognitive structures” (p. 5).
..the learner is viewed “as a person who constructs meaning” (p. 5).
 the participation metaphor (PM), which opposes the view that knowledge is a stand-alone entity and emphasizes that knowledge construction is contextual, culturally embedded, and socially mediated. Along these lines, Resnick (1994) advances the term ‘situative cognition’.
IV Bridging the conceptual divide 1 Score inference and generalization
Evaluating examinees’ performances according to the PM or the social interactional
perspective offers a serious challenge to language testers in terms of the generalizability of
scores. If internal attributes of ability are inextricably enmeshed with the specifics of a
given situation, then any inferences about ability and performance in other contexts are
questionable.
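To make the notion of score generalizability more concrete (an illustration added here, not part of Chalhoub-Deville's paper): in generalizability theory, for a simple persons-by-tasks design, the dependability of relative decisions is often summarized by the coefficient Eρ² = σ²_p / (σ²_p + σ²_{pt,e} / n_t), where σ²_p is the variance attributable to persons, σ²_{pt,e} is the person-by-task interaction (plus error) variance, and n_t is the number of tasks sampled. Read against the social interactional argument, the person-by-task component is, in effect, claimed to be part of the construct itself rather than noise to be averaged away by sampling more tasks.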
The idea of transfer of conceptual schemes is at the heart of the issue of generalization,
with which learning theorists and researchers also struggle. Schoenfeld (1999) makes a
similar point. He argues that transfer explains the manner in which learners are capable of
applying knowledge and skills in situations other than those in which they were
developed.
In addition to transfer, researchers discuss the notions of practice and familiarity with a given context to
help explain the generalizability of performances across contexts. …In other words, the synergy between
someone’s ability attributes and the features of a given context is not random and not likely to be
transient.
Theory and research
Chalhoub-Deville (1997) differentiates between theoretical models and assessment frameworks. She states
that theoretical models, such as CLA, provide a comprehensive representation of the L2 construct and
include aspects of context. She adds that assessment frameworks, which are more manageable but still
derived from theoretical models, focus on components deemed salient in the given assessment context.
Increasingly, researchers are calling for the development of more local theories that examine abilities as
engendered by the features of specific contexts. Moreover, and what is perhaps more significant, some
researchers question the feasibility and value of connecting these local theories into a general,
comprehensive one.
 In short, the ideas presented by social interactionalists are rich and deserving of deliberation and
development in order to glean a richer understanding of the nature of interaction in the L2 construct.
 In addition to considering the theoretical implications of the social interactional paradigm, language
testers need to deliberate the orientation of the social interactional research as well.
 Social interactional investigations would consider focused hypotheses of the complex interaction of
linguistic and nonlinguistic knowledge, cognitive, affective, and conative attributes engaged in particular
situations.
 A good starting point for such research would be a ‘close analysis of naturally occurring discourse and
social interaction [to] reveal the standards that apply in reality in particular settings’ (McNamara, 1997:
457).
6 Testing methods in context-based second language research
Dan Douglas
In an important paper J. B. Carroll (1968) articulated very succinctly, albeit unintentionally, the
relationship between language testing (LT) and second language acquisition (SLA) research by
defining a test as "a procedure designed to elicit certain behavior from which one can make
inferences about certain characteristics of an individual" (p. 6). If some of the "characteristics"
include the linguistic (as they do), then the link between LT and SLA is clear: A language test is
an SLA elicitation device. However, it has largely been the case that language testing concerns
and SLA concerns have tended to develop in their separate ways.
In a discussion of mutual influences between SLA research and language testing (LT), Bachman (1989) notes a growing consensus among language testing researchers that language ability consists of a number of distinct but related component abilities, that is, a multicomponential view of language ability.
Wrestling with "context" in LT and SLA research
Since the early 1980s, a recurring theme in second language acquisition studies has been the role of context both in the acquisition of the language code itself and in the development of communicative ability (for discussion see Ellis & Roberts 1987; Larsen-Freeman & Long 1991).
Context has been defined in various ways by researchers, and has been associated with such notions as "situation," "setting," "domain," "environment," "task," "content," "topic," "schema," "frame," "script," and "background knowledge." Traditionally, context has been seen as containing both linguistic and extralinguistic elements, and both types of elements may be analyzed at a micro or a macro level.
 A micro-linguistic analysis, for example, might focus on subjects' ability to supply plural morphemes in
obligatory contexts, such as "There are two…… in the nest," whereas
 A macro-linguistic analysis might focus on the discourse structure of an interaction and its effect on syntactic
choice.
 A micro-situational analysis might be concerned with lexical choice with different interlocutors, whereas
 A macro-situational analysis might study learning in classrooms versus learning in "natural" settings.
For the purposes of this discussion, I (Douglas) will collapse the distinction between linguistic and situational context as irrelevant to the main thesis, which is that context is a social-psychological construct that results from attention to a variety of external and internal factors.
A confounding factor in achieving more specificity in the study of context and SLA has been a concomitant uncertainty about the nature of language knowledge itself. As has become clear in recent years through empirical studies conducted by language testers and others, language proficiency is multicomponential (see, e.g., Canale 1983; Sang, Schmitz, Vollmer, Baumert, & Roeder 1986; Bachman 1990).
As Hymes (1972) originally formulated it, communicative competence involves judgments about what is systemically possible, psycholinguistically feasible, and socioculturally appropriate, and about what is entailed in the actual accomplishment of a particular linguistic event. It concerns how particular language tasks may be accomplished, but also includes events not within the user's control, thus bringing into the equation of language proficiency many situational and interactional factors that we normally think of as chance…
For Hymes, "competence" is more than grammatical knowledge; it has to do with rules for use: "Competence is dependent upon both [tacit] knowledge and [ability] for use". It is not Chomskyan competence: "Social life has affected not merely outward performance, but inner competence itself" (p. 274).
Hymes's own SPEAKING mnemonic:
 Situation: physical and temporal setting, psychological and cultural scene
 Participants: speaker/sender, addressor, hearer/receiver/audience, addressee
 Ends: outcomes, goals
 Act sequence: message form and message content
 Key: tone, manner
 Instrumentalities: channels, codes
 Norms: norms of interaction, norms of interpretation
 Genres: categories of speech events
Context (as a social-psychological construct)
 A context is constructed by the participants in the communicative event. A salient feature of context is that it is dynamic, constantly changing as a result of negotiation between and among the interactants as they construct it, turn by turn. Context is not an object, but a dynamic social-psychological accomplishment, and therefore, as Erickson and Shultz (1981) put it in a paper instructively titled "When is a context?":
 "The production of appropriate social behavior from moment to moment requires knowing what context one is in and when contexts change as well as knowing what behavior is considered appropriate in each of those contexts" (p. 147).
 In the study of context and its effect upon SLA and use, it is important to take account of those features of external context that participants attend to in constructing an internal context (with the proviso that we may never be able to know with any precision what those features are, because such assessment is dynamic, internal, highly personal, and to some degree unconscious). Gumperz (1976) has referred to "contextualization cues": changes in voice tone, pitch, tempo, rhythm, code, topic, style, posture, gaze, and facial expression, which are culturally conventional, highly redundant signals that interactants attend to in their mutual construction of context.
Discourse domains
 Selinker and Douglas (1985) proposed the notion of discourse domain to clearly distinguish the idea of external features of context from the internal interpretation of them in the learner. More recently they have defined discourse domain as a cognitive construct created by a language learner as a context for interlanguage development and use (Douglas & Selinker 1989, 1995).
 Douglas and Selinker (1985) hypothesized that language test takers create intelligibility in language tests either by engaging already existing discourse domains or by creating temporary domains for the purpose of "making sense" of a test item.
 This notion is related to a long-held suspicion that test takers may not be interpreting the test items in the way intended by the test constructors.
 In Hymes's terms, the interpretation of the context would interact with judgments of
 what was formally possible, feasible, and appropriate and
 what the accomplishment of the communicative event would entail.
 A learner might not possess the grammatical, psycholinguistic, or sociolinguistic "wherewithal" to carry out the linguistic task, or be able to employ effective strategies for dealing with the situation (competence includes knowledge and ability for use).
 As Widdowson (1989) suggests, an individual might possess knowledge of, say, appropriacy, but lack the ability to be appropriate. This notion is related, too, to the distinction between competence and control made by Bialystok and Sharwood Smith (1985).
 The effectiveness and accuracy of comprehension and production would reflect the variable interaction among the components of competence, which in turn would be determined by "domain strength," or, we might say, by how comfortable, knowledgeable, and in control the language user felt about the interpretation.
 A result of discomfort might be rhetorical/grammatical inaccuracy, a sociolinguistically inappropriate response, a lack of precision, or a lack of cohesion.
 On the other hand, a high degree of comfort with the interpretation could produce more accuracy, precision, and so on, even (and this is crucial) if the interpretation were "wrong."
Testing methods in context-based second language research
 Douglas and Selinker (1991, 1993) have explored this area in two empirical studies in which the goal was to investigate the
effect of manipulating test method facets (Bachman 1990) — such as the instructions, vocabulary, contextualization,
distribution of information, level of abstraction, topic, and genre — on outcomes.
 When performing a language test task in their own field of study (mathematics), examinees will not necessarily produce "better" language, but they will produce different language, allowing score users to make a more useful interpretation of the performance in terms of a candidate's language ability for the specific purposes they are interested in.
 Continuing with the findings of Douglas and Selinker, raters of the speech protocols seemed to be rating a category of
"rhetorical complexity" but calling it "grammar," since this was the only relevant category available. This was evident only
in subsequent rhetorical/grammatical interlanguage analyses.
 Finally, evidence was found (albeit inconclusively) that the field-specific tests provided a measurement advantage over the general test in predicting judges' ratings of candidates' suitability for assignment to teaching duties, using the results of a performance test as a criterion. Perhaps the most important outcome of this research has been the realization that the job of producing contextualized language tests that present relevant features of context to engage discourse domains in test takers is going to be punishingly difficult.
As a summary, suggestions concerning testing methods for context-based LT and SLA research:
1. Any factor researchers change in the test domain can lead to changes in an interlanguage user's perceptions and
assessment of the communicative situation, and thus to changes in interlanguage performance on the test.
2. Rather than attempting to minimize the effects of test method facets on the interpretation of results, LT and SLA
researchers should make use of them to design tests for particular populations.
3. Context-based language tests (i.e., those designed with the express intent of engaging specific discourse domains) can be constructed by manipulating a number of test method facets, such as test environment (administration, personnel, and location), instructions (domain-specific reasons for carrying out the test tasks), and language (subtechnical as well as technical language, context-embedded discourse, level of precision, field-specific topic, authentic genre, and pragmatic and sociolinguistic domain-related features).
4. There is likely to be a threshold level of context-based method facets, or situational authenticity, necessary for domain
engagement, and a corresponding threshold level of communicative competence necessary to respond to the context.
5. Subjects taking context-based tests will be more likely, as a result of domain engagement, to focus on content than on
form.
6. Context-based test tasks may be inherently more complex than those on general purpose tests.
7. The interlanguage produced by subjects taking context-based tests will not necessarily be more complex or even more targetlike, compared with that produced in general tests, but performance on context-based tests will allow for interpretation of language ability for specific purposes.
8. Raters or judges of context-based tests may need to have some sense of the content accuracy of responses, and may rate responses on context-based tests more conservatively as a function of not knowing how accurate the responses are.
9. Rhetorical/grammatical interlanguage analysis may be necessary to disambiguate gross subjective ratings on context-
based tests.
10. Context-based tests may provide more useful information than general purpose tests when the goal is to make field-
specific judgments about subjects' interlanguage communicative competence.
4 Strategies and processes in test taking and SLA (Andrew D. Cohen)
A process approach to test taking
Since the late 1970s, interest has slowly begun to grow in approaching second language (L2) testing
from the point of view of the strategies used by respondents going through the process of taking the test
(e.g., Cohen & Aphek 1979 … Anderson 1989; Nevo 1989). By the 1990s, L2 testing textbooks acknowledged
this concern as a possible source of insights concerning test reliability and validity (Bachman 1990;
Cohen 1994a).
Tests that are relied upon to indicate the comprehension level of readers may produce misleading
results because of numerous test-wise techniques that readers have developed for obtaining correct
answers on such tests without fully or even partially understanding the text.
Students may get an item wrong for the right reasons or right for the wrong reasons. Discovering that a
respondent used poor logic in attempting to answer a reading comprehension item may be of little
interest to the test constructors if the problem resides solely with the respondent.
 For example, respondents may plod laboriously through a text only to find that once they reach the multiple choice
questions, they have forgotten most of what they read or have failed to focus adequately on those elements being
tested. In such a case, the strategy of studying the questions carefully before reading the text might have been more
beneficial.
 In a traditional achievement testing situation, the testers usually check for control of language that the respondents have been taught, and the respondents know that there is a premium put on better performance.
 Whereas in SLA research tasks the respondents can get credit for communication in spite of inaccuracies, and their performance does not usually affect their grade in a language course, in instructional achievement testing the respondents must perform accurately, often under time constraints, and are held accountable for their responses.
 In recent years, the distinction between assessment for the purposes of SLA research versus assessment for
instructional purposes has lessened somewhat. More and more tasks and tests are being used interchangeably. As
language tests have become a greater part of SLA research, there has been a growing concern for the reliability and
validity of such measures.
What is meant by test-taking strategies?
Language use strategies are mental operations or processes that learners consciously select when
accomplishing language tasks. These strategies also constitute test-taking strategies when they are applied to
tasks in language tests.
….. the notion of strategy implies an element of selection. Otherwise, the processes would not be considered
strategies. Strategies vary according to context.
 One strategy is to opt out of the language task at hand (e.g., through a surface matching of identical
information in the passage and in one of the response choices).
 Another is to use shortcuts to arrive at answers (e.g., not reading the text as instructed but simply looking
immediately for the answers to the given reading comprehension questions).
 In such cases, the respondents may be using test-wiseness to circumvent the need to tap their actual language knowledge or lack of it, consistent with Fransson's (1984) assertion that respondents may not
proceed via the text but rather around it. In some cases, quite the contrary holds true.
The ability of learners to use language strategies has been referred to as their strategic competence — a
component of communicative language use (Canale & Swain 1980). This model puts the emphasis on
"compensatory" strategies — that is, strategies used to compensate for or remediate a lack in some language
area.
Bachman (1990) provided a broader theoretical model for viewing strategic competence. Bachman and Palmer
(1996) refined the Bachman (1990) categories somewhat. Their current framework includes an assessment
component, whereby the respondents (in the case of language testing) assess:
 which communicative goals are achievable and
 what linguistic resources are needed;
 a goal-setting component, wherein the respondents identify the specific tasks to be performed; and
 a planning component, whereby the respondents retrieve the relevant items from their language knowledge
and formulate a plan for their use in a response.
They may use lexical avoidance, simplification, or approximation
 when the exact word escapes them under the pressure of the test situation or
 because they simply do not know the word well or at all.
 Thus, in keeping with the Bachman and Palmer model, when respondents are given a situation in which to perform an oral role play:
 They may assess the situation and identify the information that is needed in that context.
 They may also set their general goals.
 They may plan their specific response and go about retrieving from their language
knowledge the grammatical, discourse, and sociocultural features needed for the role play.
 Then at some point they execute the role play. After they have finished, they may again
perform an assessment to evaluate the extent to which the communicative goal was
achieved.
 As is the case with any theoretical model, test takers may make differential use of the
components of this model when performing specific testing tasks. Hence, there are
respondents who might not assess the situation before starting the role play.
The use of verbal report in identifying test-taking strategies
The field of research methods has supplied us with suggested approaches for looking at test
taking.
1. One means is through observation of what respondents do during tests.
2. Another means is through designing items that are assumed to require the use of certain
strategies (e.g., cloze items calling for anaphoric reference) and adding up the correct
responses as an indicator of strategy use.
3. A third approach is through the use of verbal report, while the items are being answered,
immediately afterward, or some time later on.
Verbal report techniques have constituted a major tool in the gathering of data on test-taking strategies. Such techniques were initially developed in first language and then second language acquisition research in order to study the processes of reading and writing. Verbal reports have helped determine how respondents actually take tests of various kinds (Nickerson 1989; Norris …).
Strategies for taking tests of reading and writing
In considering strategies on tests of reading and writing skills,
1. the focus is first on two more indirect testing formats, multiple choice and cloze, and then on
2. strategies for three more direct formats, namely, summarization tasks, open-ended questions, and compositions.
The chapter closes with several suggestions that may lead to more effective test taking. Unless it is specified that the activity
constituted a research task, the tasks and tests for which the respondents reported strategy use contributed to their course
grades in language classes.
MULTIPLE CHOICE TEST FORMATS
Investigating a standardized test of English reading
MacKay also gave an example of an item that a student missed even though his reasoning was plausible. The statement "The cat has been out in the rain again" had to be linked to one of three pictures.
The student perceived the dotted wallpaper as snow and decided that this picture was of the exterior of the house. Thus, he gave the dripping raincoat as the correct answer. Once the child had perceived the wallpaper as snow and thus had eliminated the third picture, his selection of the first picture, the dripping raincoat, rather than the second, was perfectly reasonable, even though cats do not wear raincoats. If this kind of logic was shared by numerous respondents, it might suggest the need to improve the pictures in order to eliminate any ambiguities in them.
Asked to link the statement "The bird built his own house" to a picture, a student chose a nest of twigs with eggs in the middle over a wooden birdhouse, because he claimed that some big birds could not fit in the hole of the birdhouse. MacKay noted that the student chose the right picture for the wrong reason: the student missed the element that people, not birds, are responsible for building wooden birdhouses with perches.
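One simple way to detect such patterns across many respondents (an illustration added here, not from Cohen's chapter) is a distractor tally: count how often each option is chosen and flag items where a single wrong option attracts most responses. The option labels, key, and counts below are hypothetical.

from collections import Counter

# Hypothetical responses to a three-picture item; the keyed answer
# and the option labels are assumptions for illustration only.
choices = ["A", "A", "B", "A", "C", "A", "B", "A", "A", "B"]
key = "B"

counts = Counter(choices)
for option in sorted(counts):
    marker = " <- keyed answer" if option == key else ""
    print(f"Option {option}: {counts[option]}/{len(choices)} respondents{marker}")

# If one distractor (here, A) draws most of the responses, the item or its
# pictures are likely ambiguous and are candidates for revision.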
Another study of test-taking strategies among non-natives (Anderson et al. 1991) revealed that respondents used certain
strategies differently, depending on the type of question that was being asked. For example, the strategies of "trying to match the
stem with the text" and "guessing" were reported more frequently for inference questions than for direct statement and main idea
question types. The strategy of "paraphrasing" was reported to occur more in responding to direct statement items than with
inference and main idea question types.
READING COMPREHENSION TESTS (strategies for multiple choice)
1. Read the text passage first and make a mental note of where different kinds of information are located.
2. Read the questions a second time for clarification.
3. Return to the text passage to look for the answer.
4. Find the portion of the text that the question refers to and then look for clues to the answer.
5. Look for answers to questions in chronological order in the text.
6. Read the questions first so that the reading of the text is directed at finding answers to those questions.
7. Try to produce your own answer to the question before looking at the options that are provided in the test.
8. Use the process of elimination, i.e., select a choice not because you are sure that it is the correct answer, but because the other choices do not seem reasonable, because they seem similar or overlapping, or because their meaning is not clear to you.
9. Choose an option that seems to deviate from the others, is special, is different, or conspicuous.
10. Select a choice that is longer/shorter than the others.
11. Take advantage of clues appearing in other items in order to answer the item under consideration.
12. Take into consideration the position of the option among the choices (first, second, etc.).
13. Select the option because it appears to have a word or phrase from the passage in it — possibly a key word.
14. Select the option because it has a word or phrase that also appears in the question.
15. Postpone dealing with an item or a given option until later.
16. Make an educated guess, e.g., use background knowledge or extratextual knowledge in making the guess.
17. Budget your time wisely on this test.
18. Change your responses as appropriate — e.g., you may discover new clues in another item.
CLOZE TEST FORMATS
Research regarding strategies for taking cloze tests is of interest in that it has helped to determine whether such tests actually measure global reading skills, as they are commonly purported to do. As more studies have been undertaken on the cloze test, it has become clearer that the instrument elicits more local, word-level reading than it does macro- or discourse-level reading (Klein-Braley 1981; Alderson 1983; Lado 1986), contrary to the claims of its early supporters, who have maintained that cloze assesses global reading (see, e.g., Chihara et al. 1977; Chávez-Oller et al. 1985). … The results demonstrated that although the better readers were more likely to use macro-level schemata and strategies in completing the cloze and also did better on the completion task as a whole, all respondents favored micro-level reading at the sentence level.
Thus, the research on strategies in taking cloze tests would suggest that such tests assess local-level reading more than they measure global reading ability. Furthermore, such tests are more likely to test for local-level reading when they are in a foreign language (see, e.g., MacLean & d'Anglejan 1986).
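To make concrete what the instrument under discussion looks like, here is a minimal sketch (my illustration, not from the chapter) of the standard fixed-ratio cloze construction: delete every nth word after an intact opening stretch and keep the deleted words as the answer key. The deletion ratio, lead-in length, and scoring method (exact word vs. acceptable word) are choices a real test developer would make deliberately.

def make_cloze(text, n=7, lead_in=8):
    """Fixed-ratio cloze: blank every nth word after an intact lead-in.

    Illustrative only; real cloze construction also controls passage
    difficulty, blank length, and the scoring procedure.
    """
    words = text.split()
    answers = []
    for i in range(lead_in, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

passage = ("The results demonstrated that although the better readers were "
           "more likely to use macro-level schemata in completing the cloze, "
           "all respondents favored micro-level reading at the sentence level.")
cloze_text, answer_key = make_cloze(passage, n=7)
print(cloze_text)
print("Answer key:", answer_key)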
More direct formats
SUMMARIZATION TASKS
Whereas more direct formats for testing, such as text summarization, are less likely than indirect formats to elicit test-
taking strategies that take the place of genuine language use strategies, responses to such measures are still influenced by
test-wiseness. As long as the task is part of a test, students are likely to use strategies that they would not use under non-
test conditions. In the case of a summary task, the respondent is invariably summarizing a text for a reader who already has
a notion of what the summary should look like; therefore, the respondent is reacting to a set of perceived and read
expectations on the part of the reader. In the real world, we usually summarize a text for our own future use or for the
benefit of someone who has not read it, in which case the set of expectations of the summarizer may be quite different.
OPEN-ENDED QUESTIONS AND COMPOSITIONS
Like summarization tasks, open-ended tasks allow respondents to copy material directly from a text in their response, so that
raters cannot tell whether the respondent in fact understands the material. Such copying may produce linguistically awkward
responses. For example, in a study of 19 college-level learners of Hebrew L2 that involved a retrospective verbal report one
week after the learners took their final exam, it was found that students lifted material intact from an item stimulus or from a
text passage for use in their written responses (Cohen & Aphek 1979).
Conclusion
This chapter has examined process approaches to language testing, which have usually entailed the use of verbal report
techniques to better understand these processes and the test-taking strategies that respondents use. There are not yet
abundant data linking the specific use of test-taking strategies with success or failure on language tests. On the other hand,
the use of qualitative methodologies such as verbal report provides a valuable source of information — perhaps the most
focused possible — on the strategies respondents used in their responses and why they did so. Verbal report can help us see
what items are actually testing, aiding us in making decisions about which items to keep and which to throw out. One could go so far as to say that it is now close to essential to have verbal report as part of pretesting/piloting. If a respondent gets an item wrong for legitimate reasons, then the item needs to be rewritten.
The emphasis in this chapter has been on the relationship between the characteristics of the test task and the strategies used, and on the connection between testing methods and SLA research, especially on the types of investigations needed to provide information as to which testing methods would be potentially more or less reliable and valid for SLA research.
THANK YOU SO MUCH
This is my Painting
The relationship between Language testing & SLA
 
Abstact 2
Abstact 2Abstact 2
Abstact 2
 
Vocabular learning strategies preferred by knorean univ st
Vocabular learning strategies preferred by knorean univ stVocabular learning strategies preferred by knorean univ st
Vocabular learning strategies preferred by knorean univ st
 

Recently uploaded

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentInMediaRes1
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 

Recently uploaded (20)

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media Component
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 

The relationship between Language testing & SLA

  • 1. SLA & LANGUAGE TESTING / ASSESSMENT The relationship between language testing and second language acquisition, revisited Elana Shohamy*( paper) Second language interaction: current perspectives and future trends Micheline Chalhoub-Deville University of Iowa (paper) Interfaces Between Second Language Acquisition and Language Testing Research 6 Testing methods in context-based second language research Instructor: Dr. Glamreza Abbasian Dan Douglas 4 Strategies and processes in test taking and SLA Provider: Kobra (Minoo) Tajahmadi Andrew D. Cohen PhD Student Tehran University South Branch Student Number: 9675210044
  • 2. The paper examines the relationship between and the relevance of second language acquisition (SLA) and language testing (LT). Based on three dimensions of potential contributions of LT to SLA  (1) defining the construct of language ability;  (2) applying LT findings to test SLA hypotheses; and  (3) providing SLA researchers with quality criteria for tests and tasks
  • 3.  and three dimensions of potential contribution of SLA to LT  (1) identifying language components for elicitation and criteria assessment;  (2) proposing tasks for assessing language; and  (3) informing language testers about differences and accommodating these differences,  this paper examines the interfaces of the two fields based on articles published in recent issues of the journals ``Language Testing'' and ``Studies in Second Language Acquisition''.
  • 4. Method 3.1. Research question & 3.2 Data collection procedures The research questions posed in this paper refer to two dimensions: 1. the interfaces between SLA and LT 2. the relevance of LT to SLA as perceived by SLA researchers. 3.2.1. Interfaces The extent to which the two fields contribute to one another is based on a number of published articles in SLA and LT journals. Three issues of ``Studies in Second Language Acquisition'' (SSLA, Vol. 21, issue Nos. 1, 3, and 4) and three issues of ``Language Testing'' (Vol. 13, issue Nos. 1, 2, and 3) were selected as sources of data for they represent the two fields, respectively. These articles were analyzed according to the six areas of potential contribution mentioned above. 3.2.2. Relevance The relevance of LT to SLA was examined based on written interviews with leading researchers who have familiarity and experience with LT. An attempt was made to select researchers from diverse areas in SLA who are knowledgeable about the field of LT. The researchers come from areas of applied linguistics, SLA and LT; they are all of high stature in the field and have national and/or international reputations. Each of the respondents was asked to express his/her perceptions with regards to the relevance of LT to their area of expertise in response to the written electronic mail letter. Ten people were contacted, eight responded. Information regarding their background and areas of research in SLA are included.
  • 5. Results: 4.1. Interfaces 4.1.1. Contribution of LT to SLA research The analysis is based on 10 articles appearing in the three issues of SSLA (Vol. 21, 1999: Williams, 1999; Myles et al., 1999; Moyer, 1999 and Ortega, 1999 in issue No. 1; Pienemann and Hakansson, 1999; Izumi et al., 1999 and Saito, 1999 in issue No. 3; and Rosa and O'Neill, Mackey, and Rott in issue No. 4).  The following patterns were observed regarding the contribution of LT to SLA.  In terms of defining constructs of language ability identified in LT, none of the SLA articles focused on any issue of LT and none of the articles applied findings obtained in LT to their research. Yet, there were often references to problems and issues relating to testing, especially with regard to elicitation procedures.  In terms of applying LT findings to SLA research, none of the articles makes any reference to research in LT, not even within the context of discussions about elicitation procedures. It almost seems as if research on elicitation procedures is not viewed as part of the LT discipline.  In terms of applying quality criteria (i.e. reliability and validity) developed in LT to the various data collection tools used in SLA research, there is very little evidence of such application. While all the SLA research articles use a variety of data collection procedures, often very innovative ones, they very rarely report or apply the psychometric techniques developed in LT for examining the quality of these SLA tools. Thus, most of the articles provide no information on the reliability, validity or item characteristics of the tests and tasks.
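To make the missing psychometric step concrete, the sketch below shows one way an SLA researcher might apply basic LT quality criteria to an elicitation instrument: computing Cronbach's alpha as an internal-consistency reliability estimate, together with corrected item-total correlations. The score matrix, the dichotomous item format, and the choice of alpha as the index are illustrative assumptions, not data or procedures from the paper.

```python
# Minimal sketch (illustrative data): applying basic psychometric quality
# criteria -- internal-consistency reliability and item-total correlations --
# to item-level scores from an SLA elicitation task.
import numpy as np

# Hypothetical scores: rows = learners, columns = items (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
])

k = scores.shape[1]                      # number of items
item_var = scores.var(axis=0, ddof=1)    # variance of each item
total = scores.sum(axis=1)               # each learner's total score
total_var = total.var(ddof=1)

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Corrected item-total correlation: each item vs. the total of the other items
item_total_r = [
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1] for j in range(k)
]

print(f"Cronbach's alpha: {alpha:.2f}")
for j, r in enumerate(item_total_r, start=1):
    print(f"Item {j}: corrected item-total r = {r:.2f}")
```

Reporting figures of this kind alongside the elicitation tasks would be one concrete form of the "application of quality criteria" that the analysis finds largely absent from the SLA articles.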
  • 6. 4.1.2. Contribution of SLA to LT This analysis is based on the articles appearing in three issues of ``Language Testing'' (Vol. 16, 1999: Freedle and Kostin, 1999; Laufer and Nation, 1999 and Papajohn, 1999 in issue No. 1; Beglar and Hunt, 1999; Kormos, 1999; Schmitt, 1999 and Brown, 1999 in issue No. 2; and Marquardt and Gillam, 1999; Holm et al., 1999; Windsor et al., 1999; Cardell and Chenery, 1999 in issue No. 3). The following patterns were observed regarding the contribution of SLA to LT.  In terms of identifying components from SLA for the elicitation procedures, most articles in LT are based on phenomena identified in SLA research, such as vocabulary acquisition or oral performance, but applied to testing situations.  In terms of identifying tasks for assessing language, there is no evidence that SLA has any impact on the methods used for assessment, as all LT articles employ tests exclusively and no other methods of language elicitation.  In terms of accommodating differences found in SLA in designing tests, there is hardly any evidence of this, as most studies refer to the learners as homogeneous groups. The only article that addresses the issue is Brown (1999), which discusses the relative importance of the language background of individual test takers to test performance.
  • 7. 4.2. Relevance: The following views were expressed by researchers in SLA regarding the relevance of LT. Interviewee 1: In theory LT is very relevant but not in practice; there is much more that could be done. Not enough assessment of interlanguage and pragmatics, and LT holds a narrow view of language. Language tests do not address the complexity and the dynamics of language in a non-linear way. Interviewee 2: LT determines in advance what people should know, yet this is not what test takers really do. Language testers are not paying enough attention to the learner and his/her attitudes towards tests. Interviewee 3: LT is based on old views of SLA and does not relate to practical needs. It does not relate to language processing and holds on to the black-box approach, pretending that there is no idea of what's going on within that machine. Also, there is a lack of attention to current needs in Europe. Interviewee 4: LT is very relevant, especially in the political and curricular context, and has a pivotal power position. Yet, the profession tends to take whatever assessment or testing practices prevail as the `natural' situation, thus affecting absolutely everything we do, from research foci, to research design, to the reporting and analyzing of results, to potential implications for curriculum development, teaching practice, materials development, and, last but certainly not least, to major educational and social policy decisions. The influence/relevance of assessment is more crucial now than ever, since things do not change unless some key assumptions and practices in assessment/testing change and become institutionalized. There is a need to devise more sophisticated assessment practices and to depart from what has been the customary playground, conceptual apparatus, methodology, and power position of the `psychometric folks'.
  • 8. Interviewee 5: SLA researchers and LT researchers work in parallel to solve problems of learning second languages. SLA researchers should be better informed about work in LT, as many breakthroughs in language testing theory are centrally relevant to SLA. Interviewee 6: LT is not sensitive to teaching and learning and to the educational context. SLA researchers have inadvertently violated certain basic principles of sound testing. By the same token, language testers have sometimes been culpable of preparing tests that violate basic SLA principles (such as the concern for contextual variables) that would have made the tests more valid. Interviewee 7: LT is not sensitive to the needs of individual instructors and to the realities of running language programs. The emphasis on statistics by testing communities stifles creativity in test development. Interviewee 8: Not relevant as long as it is testing and not assessment. There is a need to work for relevance. Without locating and understanding language testing within a broad spectrum of assessment situations, those concerned with the work of language testing are in danger of, on the one hand, attempting to colonise areas that are inappropriate (by expanding the use of tests and testing procedures to contexts where other forms of assessment may be more useful for teachers, learners, parents and administrators), and, on the other, ignoring their potential contribution to other (often highly complex) arenas in which language assessment is taking place (by failing to alert naive practitioners of language assessment to the technical dimensions of what they are doing or propose to do).
  • 9. Conclusions & interpretations  The evidence of interfaces and relevance is limited in its scope: there are limited interfaces between SLA and LT, and by and large LT has limited relevance to SLA. LT is not being used by SLA researchers, either in terms of constructs, procedures and applications of findings, or in terms of applying criteria of reliability and validity for developing quality data-collection instruments.  From the analysis of the LT research it is evident that language testers do refer to and consider topics and issues of SLA in test development and use, yet the type of research is still restricted to tests as the main data-collection tool, while other procedures used by SLA researchers for data collection are not used by language testers.  Further, in most cases, language testers do not consider variations among learners.  Then there is the view that LT is not relevant because it does not address the process by which learners process language: language tests determine in advance what will be tested based on what testers believe learners do, while learners process language differently.  Language testers need to pay more attention to the learners, to the test takers who take the tests, and to the needs of the institutions and groups where tests are needed.  Today as well, there is limited interaction and relevance between LT and SLA. It is difficult to explain why such a situation exists, or why two disciplines that have common goals operate in such isolated ways, do not cooperate, and do not build on the knowledge gained by each field.
  • 10. Recommendations & implications  …broaden and update the scope of assessment, e.g. by constructing theories of LT on current views of SLA and by assessing more components of language, such as interlanguage variation, language processing, pragmatics, and aspects that address the context, complexity and dynamics of language.  …approaches to LT need to be determined not only by theories of language abilities but mostly by what learners actually do and how they process language.  …involve the test takers in decisions about the type of assessment procedures they feel comfortable with.  …language testers need to market their findings and make SLA researchers aware of and informed about the work of language testers.
  • 11.  Language testers need to be aware of the phenomenon whereby research in LT is viewed as the province of a limited group of people who deal with psychometric issues and statistics, which makes it difficult to gain insight into LT knowledge that may be very relevant.  …demonstrate to SLA researchers where and how research in LT can be useful and relevant.  Language testers need to be more attuned to the educational context where language tests are most often being used, as part of teaching and learning and the realities of language programs.  …language testers need to broaden their view from testing to assessment: from psychometric and statistical approaches, and from tests to different types of assessment procedures.  …language testers need to work for relevance.
  • 12. Second language interaction: current perspectives and future trends Since the late 1960s, the language testing field has earnestly occupied itself with the nature of the L2 construct. Kunnan (1998) states that findings advanced by most researchers support the claim that L2 is multicomponential; and indicates that research so far has not been able to identify the specific components underlying this multidimensional construct and how these components interact. Douglas (2000: 25) makes a similar point: ‘[a]s has become clear in recent years through empirical studies conducted by language testers and others, language knowledge is multicomponential; however, what is extremely unclear is precisely what those components may be and how they interact in actual language use’. In short, while theoretical arguments and empirical evidence have ascertained the multicomponentiality of the L2 construct, consensus is absent regarding the nature of these components and the manner in which they interact.
  • 13. II Interaction in L2 use Given the lack of consensus on the specific components that underlie the L2 construct and how these components interact in language use, it is not surprising that different models exist (see accounts of these models in McNamara, 1996; Chalhoub-Deville, 1997; Chapelle, 1998; Chalhoub-Deville and Deville, in press).  According to Alderson and Banerjee (2002), however, the model that represents the state of the art in the field is the communicative language ability (CLA) model (Bachman, 1990; Bachman and Palmer, 1996). Moreover, CLA has recently found recognition in the general measurement literature (Bachman, 2002a). Consequently, discussion of the nature of interaction in L2 use focuses mainly on the CLA representation.  In discussing interactional communicative ability, Bachman (1990: 302) cites the work of Kramsch (1986) and states that the ‘ “interactional” aspect of the approach is rooted in the same views of language and language use that have informed communicative language teaching, an aspect that Kramsch (1986) has referred to as “interactional competence”’.
  • 14. In defining and explicating interactional competence, Kramsch (1986: 367) writes: successful interaction presupposes not only a shared  external context of communication, or shared knowledge of the world, but also the construction of a shared  internal context or ‘sphere of inter-subjectivity’ that is built through the collaborative effort of the interactional partners.  Bachman focuses on the abilities the individual possesses, and one surmises that the context is important to the extent that it helps draw out those intended abilities.  Bachman emphasizes both a construct-based and a task-based approach to test design, which gives tasks (or contexts) equal prominence in test design and interpretation. Nonetheless, Bachman still posits a distinction between the abilities targeted and the context in which they are observed.  This position diverges from that advocated by proponents of interactional competence, who view the language use situation primarily as a social event in which ability, language users, and context are inextricably meshed.  In considering the interactional competence perspective, the following representation should be considered: ‘ability – in language user – in context’. This representation claims that the ability components the language user brings to the situation or context interact with situational facets to change those facets as well as to be changed by them.
  • 15. The facet aspects of the context the language user attends to dynamically influence the ability features activated and vice versa. This perspective, in essence, maintains that ability and context features are intricately connected and it is difficult or impossible to disentangle them.  Young (2000) maintains that because both ability models and interactional models consider a learner’s abilities as well as contextual features, the differences between the two might appear trivial. He argues, however, that these models represent fundamentally different perspectives on language use.  Young (2000: 9) disagrees with the notion that language use is the manifestation of ‘a trait or bundle of traits’ that can be assessed in an individual. He also does not support the notion of a distinction between ability and performance.
  • 16. In conclusion  In its current cognitive formulation, CLA portrays a link between ‘ability – in language user’ and ‘the context’, underscoring the two as important but separate entities.  The social perspective of interaction, however, rejects that postulation in favor of one that positions ‘ability – in language user – in context’. The social interactional perspective compels language testers to address two fundamental challenges: 1. · amending the construct of individual ability to accommodate the notion that language use in a communicative event reflects dynamic discourse, which is co-constructed among participants; and 2. · the notion that language ability is local and the conundrum of reconciling that with the need for assessments to yield scores that generalize across contextual boundaries.
  • 17. III Social-cognitive construct representation The interactional, co-constructionist perspective is grounded in anthropology, sociology, and developmental psychology. It is inspired by the work of the Russian developmental psychologist Lev Vygotsky (1896-1934; Vygotsky, 1978) and his followers. Sfard (1998) has characterized the debate among learning researchers in terms of two metaphors:  the acquisition metaphor (AM), which views the learner as developing “basic units of knowledge that can be accumulated, gradually refined, and combined to form ever richer cognitive structures” (p. 5); the learner is viewed “as a person who constructs meaning” (p. 5).  the participation metaphor (PM), which opposes the view that knowledge is a stand-alone entity and emphasizes that knowledge construction is contextual, culturally embedded, and socially mediated. Along these lines, Resnick (1994) advances the term ‘situative cognition’.
  • 18. IV Bridging the conceptual divide 1 Score inference and generalization Evaluating examinees’ performances according to the PM or the social interactional perspective offers a serious challenge to language testers in terms of the generalizability of scores. If internal attributes of ability are inextricably enmeshed with the specifics of a given situation, then any inferences about ability and performance in other contexts are questionable. The idea of transfer of conceptual schemes is at the heart of the issue of generalization, with which learning theorists and researchers also struggle. Schoenfeld (1999) makes a similar point. He argues that transfer explains the manner in which learners are capable of applying knowledge and skills in situations other than those in which they were developed.
  • 19. In addition to transfer, researchers discuss the notions of practice and familiarity with a given context to help explain the generalizability of performances across contexts. …In other words, the synergy between someone’s ability attributes and the features of a given context is not random and not likely to be transient. Theory and research Chalhoub-Deville (1997) differentiates between theoretical models and assessment frameworks. She states that theoretical models, such as CLA, provide a comprehensive representation of the L2 construct and include aspects of context. She adds that assessment frameworks, which are more manageable but still derived from theoretical models, focus on components deemed salient in the given assessment context. Increasingly, researchers are calling for the development of more local theories that examine abilities as engendered by the features of specific contexts. Moreover, and what is perhaps more significant, some researchers question the feasibility and value of connecting these local theories into a general, comprehensive one.
  • 20.  In short, the ideas presented by social interactionalists are rich and deserving of deliberation and development in order to glean a richer understanding of the nature of interaction in the L2 construct.  In addition to considering the theoretical implications of the social interactional paradigm, language testers need to deliberate the orientation of the social interactional research as well.  Social interactional investigations would consider focused hypotheses of the complex interaction of linguistic and nonlinguistic knowledge, cognitive, affective, and conative attributes engaged in particular situations.  A good starting point for such research would be a ‘close analysis of naturally occurring discourse and social interaction [to] reveal the standards that apply in reality in particular settings’ (McNamara, 1997: 457).
  • 21. 6 Testing methods in context-based second language research Dan Douglas In an important paper J. B. Carroll (1968) articulated very succinctly, albeit unintentionally, the relationship between language testing (LT) and second language acquisition (SLA) research by defining a test as "a procedure designed to elicit certain behavior from which one can make inferences about certain characteristics of an individual" (p. 6). If some of the "characteristics" include the linguistic (as they do), then the link between LT and SLA is clear: A language test is an SLA elicitation device. However, it has largely been the case that language testing concerns and SLA concerns have tended to develop in their separate ways.
  • 22. In a discussion of mutual influences between SLA research and language testing (LT), Bachman (1989) notes a growing consensus among language testing researchers that language ability consists of a number of distinct but related component abilities. This multicomponential view of language ability… Wrestling with "context" in LT and SLA research Since the early 1980s, a recurring theme in second language acquisition studies has been the role of context, both in the acquisition of the language code itself and in the development of communicative ability (for discussion see Ellis & Roberts 1987; Larsen-Freeman & Long 1991). Context has been defined in various ways by researchers, and has been associated with such notions as "situation," "setting," "domain," "environment," "task," "content," "topic," "schema," "frame," "script," and "background knowledge." Traditionally, context has been seen as containing both linguistic and extralinguistic elements, and both types of elements may be analyzed at a micro or a macro level.
  • 23.  A micro-linguistic analysis, for example, might focus on subjects' ability to supply plural morphemes in obligatory contexts, such as "There are two…… in the nest," whereas  A macro-linguistic analysis might focus on the discourse structure of an interaction and its effect on syntactic choice.  A micro-situational analysis might be concerned with lexical choice with different interlocutors, whereas  A macro-situational analysis might study learning in classrooms versus learning in "natural" settings. For the purposes of this discussion, I (Douglas) will collapse the distinction between linguistic and situational context as irrelevant to the main thesis, which is that context is a social-psychological construct that results from attention to a variety of external and internal factors. A confounding factor in achieving more specificity in the study of context and SLA has been a concomitant uncertainty about the nature of language knowledge itself. As has become clear in recent years through empirical studies conducted by language testers and others, language proficiency is multicomponential (see, e.g., Canale 1983; Sang, Schmitz, Vollmer, Baumert, & Roeder 1986; Bachman, 1990).
  • 24. As Hymes (1972) originally formulated it, communicative competence involves judgments about what is systemically possible, psycholinguistically feasible, and socioculturally appropriate, and about what is entailed in the actual accomplishment of a particular linguistic event. …about how particular language tasks may be accomplished, but also includes events not within the user's control, thus bringing into the equation of language proficiency many situational and interactional factors that we normally think of as chance… …for Hymes, "competence" is more than grammatical knowledge; it has to do with rules for use: "Competence is dependent upon both [tacit] knowledge and [ability] for use". It is not Chomskian competence: "Social life has affected not merely outward performance, but inner competence itself" (p. 274). Hymes's own SPEAKING mnemonic:  Situation: physical and temporal setting, psychological and cultural scene  Participants: speaker/sender, addressor, hearer/receiver/audience, addressee  Ends: outcomes, goals  Act sequence: message form and message content  Key: tone, manner  Instrumentalities: channels, codes  Norms: norms of interaction, norms of interpretation  Genres: categories of speech events
  • 25. Context (as a social-psychological construct)  A context is constructed by the participants in the communicative event. A salient feature of context is that it is dynamic, constantly changing as a result of negotiation between and among the interactants as they construct it, turn by turn. …context is not an object, but a dynamic social-psychological accomplishment, and therefore, as Erickson and Shultz (1981) put it, in a paper instructively titled "When is a context?":  "The production of appropriate social behavior from moment to moment requires knowing what context one is in and when contexts change as well as knowing what behavior is considered appropriate in each of those contexts" (p. 147).  In the study of context and its effect upon SLA and use, it is important to take account of those features of external context that participants attend to in constructing an internal context (with the proviso that we may never be able to know with any precision what those features are, because such assessment is dynamic, internal, highly personal, and to some degree unconscious). Gumperz (1976) has referred to "contextualization cues": changes in voice tone, pitch, tempo, rhythm, code, topic, style, posture, gaze, and facial expression, which are culturally conventional, highly redundant signals that interactants attend to in their mutual construction of context.
  • 26. Discourse domains  Selinker and Douglas (1985) proposed the notion of discourse domain to clearly distinguish the idea of external features of context from the internal interpretation of them in the learner. More recently they have defined discourse domain as a cognitive construct created by a language learner as a context for interlanguage development and use (Douglas & Selinker 1989, 1995).  Douglas and Selinker (1985) hypothesized that language test takers create intelligibility in language tests either by engaging already existing discourse domains or by creating temporary domains for the purpose of "making sense" of a test item.  This notion is related to a long-held suspicion that test takers may not be interpreting the test items in the way intended by the test constructors.
  • 27.  In Hymes's terms, the interpretation of the context would interact with judgments of  what was formally possible, feasible, and appropriate and  what the accomplishment of the communicative event would entail.  A learner might not possess the grammatical, psycholinguistic, or sociolinguistic "wherewithal" to carry out the linguistic task, or be able to employ effective strategies for dealing with the situation. (Competence includes knowledge and ability.)  As Widdowson (1989) suggests, an individual might possess knowledge of, say, appropriacy, but lack the ability to be appropriate. This notion is related, too, to the distinction between competence and control made by Bialystok and Sharwood Smith (1985).  The effectiveness and accuracy of comprehension and production would reflect the variable interaction among the components of competence, which in turn would be determined by "domain strength," or, we might say, by how comfortable, knowledgeable, and in control the language user felt about the interpretation.  A result of discomfort might be rhetorical/grammatical inaccuracy, a sociolinguistically inappropriate response, a lack of precision, or a lack of cohesion.  On the other hand, a high degree of comfort with the interpretation could produce more accuracy, precision, and so on, even (and this is crucial) if the interpretation were "wrong."
  • 28. Testing methods in context-based second language research  Douglas and Selinker (1991, 1993) have explored this area in two empirical studies in which the goal was to investigate the effect of manipulating test method facets (Bachman 1990) — such as the instructions, vocabulary, contextualization, distribution of information, level of abstraction, topic, and genre — on outcomes.  When performing a language test task in their own field of study (mathematics), examinees will not necessarily produce "better" language — but they will produce different language, allowing score users to make a more useful interpretation of the performance in terms of a candidate's language ability for the specific purposes they are interested in.  Continuing with the findings of Douglas and Selinker, raters of the speech protocols seemed to be rating a category of "rhetorical complexity" but calling it "grammar," since this was the only relevant category available. This was evident only in subsequent rhetorical/grammatical interlanguage analyses.  Finally, evidence was found (albeit inconclusive) that the field-specific tests provided a measurement advantage over the general test in predicting judges' ratings of candidates' suitability for assignment to teaching duties, using the results of a performance test as a criterion. Perhaps the most important outcome of this research has been the realization that the job of producing contextualized language tests that present relevant features of context to engage discourse domains in test takers is going to be punishingly difficult.
  • 29. As a summary, suggestions concerning testing methods for context-based LT and SLA research:
1. Any factor researchers change in the test domain can lead to changes in an interlanguage user's perceptions and assessment of the communicative situation, and thus to changes in interlanguage performance on the test.
2. Rather than attempting to minimize the effects of test method facets on the interpretation of results, LT and SLA researchers should make use of them to design tests for particular populations.
3. Context-based language tests (i.e., those designed with the express intent of engaging specific discourse domains) can be constructed by manipulating a number of test method facets, such as test environment (administration, personnel, and location), instructions (domain-specific reasons for carrying out the test tasks), and language (subtechnical as well as technical language, context-embedded discourse, level of precision, field-specific topic, authentic genre, and pragmatic and sociolinguistic domain-related features); see the sketch after this list.
4. There is likely to be a threshold level of context-based method facets, or situational authenticity, necessary for domain engagement, and a corresponding threshold level of communicative competence necessary to respond to the context.
5. Subjects taking context-based tests will be more likely, as a result of domain engagement, to focus on content than on form.
6. Context-based test tasks may be inherently more complex than those on general purpose tests.
7. The interlanguage produced by subjects taking context-based tests will not necessarily be more complex or even more target-like, compared with that produced in general tests, but performance on context-based tests will allow for interpretation of language ability for specific purposes.
8. Raters or judges of context-based tests may need to have some sense of the content accuracy of responses, and may rate context-based test responses more conservatively as a function of not knowing how accurate the responses are.
9. Rhetorical/grammatical interlanguage analysis may be necessary to disambiguate gross subjective ratings on context-based tests.
10. Context-based tests may provide more useful information than general purpose tests when the goal is to make field-specific judgments about subjects' interlanguage communicative competence.
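As an illustration of suggestion 3 above, the sketch below represents a set of test method facets as a simple data structure and derives a field-specific test variant from a general-purpose baseline by changing only the facets intended to engage a discourse domain. The class, the facet names, and the values are hypothetical illustrations loosely modeled on the facets listed above; they are not Douglas and Selinker's actual instruments.

```python
# Minimal sketch (hypothetical facet names and values): specifying a
# context-based test variant by manipulating test method facets.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TestMethodFacets:
    environment: str      # administration setting and personnel
    instructions: str     # domain-specific rationale given for the tasks
    topic: str            # field-specific vs. general topic
    genre: str            # e.g. short prose passage, mini-lecture
    lexis: str            # technical / subtechnical / general vocabulary

# A general-purpose baseline specification
general = TestMethodFacets(
    environment="language lab, test proctor",
    instructions="answer the questions about the passage",
    topic="everyday narrative",
    genre="short prose passage",
    lexis="general",
)

# A field-specific (mathematics) variant: only the facets intended to
# engage the examinee's discourse domain are changed.
math_specific = replace(
    general,
    instructions="explain the procedure to a first-year mathematics class",
    topic="limits of a function",
    genre="mini-lecture",
    lexis="technical and subtechnical",
)

print(math_specific)
```

Keeping the manipulated facets explicit in this way makes it easier to report, for each test variant, which features of external context were offered to test takers for domain engagement.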
  • 30. 4 Strategies and processes in test taking and SLA Andrew D. Cohen A process approach to test taking Since the late 1970s, interest has slowly begun to grow in approaching second language (L2) testing from the point of view of the strategies used by respondents going through the process of taking the test (e.g., Cohen & Aphek 1979; … Anderson 1989; Nevo 1989). By the 1990s, L2 testing textbooks acknowledged this concern as a possible source of insights concerning test reliability and validity (Bachman 1990; Cohen 1994a). Tests that are relied upon to indicate the comprehension level of readers may produce misleading results because of numerous test-wise techniques that readers have developed for obtaining correct answers on such tests without fully or even partially understanding the text. Students may get an item wrong for the right reasons or right for the wrong reasons. Discovering that a respondent used poor logic in attempting to answer a reading comprehension item may be of little interest to the test constructors if the problem resides solely with the respondent.
  • 31.  For example, respondents may plod laboriously through a text only to find that once they reach the multiple-choice questions, they have forgotten most of what they read or have failed to focus adequately on those elements being tested. In such a case, the strategy of studying the questions carefully before reading the text might have been more beneficial.  In a traditional achievement testing situation, the testers usually check for control of language that the respondents have been taught. The respondents know that there is a premium put on better performance. Whereas in SLA research tasks the respondents can get points for communication in spite of inaccuracies, and their performance does not usually affect their grade in a language course, in instructional achievement testing the respondents must perform accurately, often under time constraints, and are held accountable for their responses.  In recent years, the distinction between assessment for the purposes of SLA research versus assessment for instructional purposes has lessened somewhat. More and more tasks and tests are being used interchangeably. As language tests have become a greater part of SLA research, there has been a growing concern for the reliability and validity of such measures.
  • 32. What is meant by test-taking strategies? Language use strategies are mental operations or processes that learners consciously select when accomplishing language tasks. These strategies also constitute test-taking strategies when they are applied to tasks in language tests. … the notion of strategy implies an element of selection. Otherwise, the processes would not be considered strategies. Strategies vary according to context.  One strategy is to opt out of the language task at hand (e.g., through a surface matching of identical information in the passage and in one of the response choices).  Another is to use shortcuts to arrive at answers (e.g., not reading the text as instructed but simply looking immediately for the answers to the given reading comprehension questions).  In such cases, the respondents may be using test-wiseness to circumvent the need to tap their actual language knowledge or lack of it, consistent with Fransson's (1984) assertion that respondents may not proceed via the text but rather around it. In some cases, quite the contrary holds true.
  • 33. The ability of learners to use language strategies has been referred to as their strategic competence — a component of communicative language use (Canale & Swain 1980). This model puts the emphasis on "compensatory" strategies — that is, strategies used to compensate for or remediate a lack in some language area. Bachman (1990) provided a broader theoretical model for viewing strategic competence. Bachman and Palmer (1996) refined the Bachman (1990) categories somewhat. Their current framework includes an assessment component, whereby the respondents (in the case of language testing) assess:  which communicative goals are achievable and  what linguistic resources are needed;  a goal-setting component, wherein the respondents identify the specific tasks to be performed; and  a planning component, whereby the respondents retrieve the relevant items from their language knowledge and formulate a plan for their use in a response. They may use lexical avoidance, simplification, or approximation  when the exact word escapes them under the pressure of the test situation or  because they simply do not know the word well or at all.
  • 34.  Thus, in keeping with the Bachman and Palmer model, when respondents are given a situation in which to perform an oral role play:  They may assess the situation and identify the information that is needed in that context.  They may also set their general goals.  They may plan their specific response and go about retrieving from their language knowledge the grammatical, discourse, and sociocultural features needed for the role play.  Then at some point they execute the role play. After they have finished, they may again perform an assessment to evaluate the extent to which the communicative goal was achieved.  As is the case with any theoretical model, test takers may make differential use of the components of this model when performing specific testing tasks. Hence, there are respondents who might not assess the situation before starting the role play.
  • 35. The use of verbal report in identifying test-taking strategies The field of research methods has supplied us with suggested approaches for looking at test taking. 1. One means is through observation of what respondents do during tests. 2. Another means is through designing items that are assumed to require the use of certain strategies (e.g., cloze items calling for anaphoric reference) and adding up the correct responses as an indicator of strategy use. 3. A third approach is through the use of verbal report, while the items are being answered, immediately afterward, or some time later on. Verbal report techniques have constituted a major tool in the gathering of data on test-taking strategies. Such techniques were initially developed in first language and then second language acquisition research in order to study the processes of reading and writing. Verbal reports have helped determine how respondents actually take tests of various kinds (Nickerson 1989; Norris
  • 36. Strategies for taking tests of reading and writing In considering strategies on tests of reading and writing skills, 1. the focus is first on two more indirect testing formats, multiple choice and cloze, and 2. then on strategies for three more direct formats, namely, summarization tasks, open-ended questions, and compositions. The chapter closes with several suggestions that may lead to more effective test taking. Unless it is specified that the activity constituted a research task, the tasks and tests for which the respondents reported strategy use contributed to their course grades in language classes. MULTIPLE CHOICE TEST FORMATS Investigating a standardized test of English reading, MacKay also gave an example of an item missed for the wrong reasons. The statement "The cat has been out in the rain again" had to be linked to one of three pictures. The student perceived the dotted wallpaper as snow and decided that this picture was of the exterior of the house. Thus, he gave the dripping raincoat as the correct answer. Once the child had perceived the wallpaper as snow and thus had eliminated the third picture, his selection of the first picture, the dripping raincoat, rather than the second, was perfectly reasonable — even though cats do not wear raincoats. If this kind of logic was shared by numerous respondents, it might suggest the need to improve the pictures in order to eliminate any ambiguities in them.
  • 37. In matching the statement "The bird built his own house" to a picture, a student chose a nest of twigs with eggs in the middle over a wooden birdhouse because he claimed that some big birds could not fit in the hole of the birdhouse. MacKay noted that the student chose the right picture for the wrong reason. The student missed the element that people, not birds, are responsible for building wooden birdhouses with perches. Another study of test-taking strategies among non-natives (Anderson et al. 1991) revealed that respondents used certain strategies differently, depending on the type of question that was being asked. For example, the strategies of "trying to match the stem with the text" and "guessing" were reported more frequently for inference questions than for direct statement and main idea question types. The strategy of "paraphrasing" was reported to occur more in responding to direct statement items than with inference and main idea question types.
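A minimal sketch of the kind of tabulation behind findings like the Anderson et al. (1991) comparison just described: cross-tabulating strategies coded from verbal reports against the question types they were reported for. The strategy labels echo those mentioned above, but the report data and counts are invented purely for illustration.

```python
# Minimal sketch (invented codings): cross-tabulating reported test-taking
# strategies by multiple-choice question type.
from collections import Counter

# Hypothetical verbal-report codings: (strategy, question_type) pairs
reports = [
    ("match stem with text", "inference"),
    ("match stem with text", "inference"),
    ("guessing", "inference"),
    ("paraphrasing", "direct statement"),
    ("paraphrasing", "direct statement"),
    ("guessing", "main idea"),
    ("match stem with text", "direct statement"),
]

tally = Counter(reports)
for (strategy, q_type), count in sorted(tally.items()):
    print(f"{strategy:22s} | {q_type:18s} | {count}")
```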
  • 38. READING COMPREHENSION TEST (strategies for multiple choice)
1. Read the text passage first and make a mental note of where different kinds of information are located.
2. Read the questions a second time for clarification.
3. Return to the text passage to look for the answer.
4. Find the portion of the text that the question refers to and then look for clues to the answer.
5. Look for answers to questions in chronological order in the text.
6. Read the questions first so that the reading of the text is directed at finding answers to those questions.
7. Try to produce your own answer to the question before looking at the options that are provided in the test.
8. Use the process of elimination — i.e., select a choice not because you are sure that it is the correct answer, but because the other choices do not seem reasonable, because they seem similar or overlapping, or because their meaning is not clear to you.
9. Choose an option that seems to deviate from the others, is special, is different, or conspicuous.
10. Select a choice that is longer/shorter than the others.
11. Take advantage of clues appearing in other items in order to answer the item under consideration.
12. Take into consideration the position of the option among the choices (first, second, etc.).
13. Select the option because it appears to have a word or phrase from the passage in it — possibly a key word.
14. Select the option because it has a word or phrase that also appears in the question.
15. Postpone dealing with an item or a given option until later.
16. Make an educated guess — e.g., use background knowledge or extratextual knowledge in making the guess.
17. Budget your time wisely on this test.
18. Change your responses as appropriate — e.g., you may discover new clues in another item.
  • 39. CLOZE TEST FORMATS Research regarding strategies for taking cloze tests is of interest in that it has helped to determine whether such tests actually measure global reading skills, as they are commonly purported to do. As more studies have been undertaken on the cloze test, it has become clearer that the instrument elicits more local, word-level reading than it does macro- or discourse-level reading (Klein-Braley 1981; Alderson 1983; Lado 1986), contrary to the claims of its early supporters, who have maintained that cloze assesses global reading (see, e.g., Chihara et al. 1977; Chávez-Oller et al. 1985). … The results demonstrated that although the better readers were more likely to use macro-level schemata and strategies in completing the cloze and also did better on the completion task as a whole, all respondents favored micro-level reading at the sentence level. Thus, the research on strategies in taking cloze tests would suggest that such tests assess local-level reading more than they measure global reading ability. Furthermore, such tests are more likely to test for local-level reading when they are in a foreign language (see, e.g., MacLean & d'Anglejan 1986). More direct formats SUMMARIZATION TASKS Whereas more direct formats for testing, such as text summarization, are less likely than indirect formats to elicit test-taking strategies that take the place of genuine language use strategies, responses to such measures are still influenced by test-wiseness. As long as the task is part of a test, students are likely to use strategies that they would not use under non-test conditions. In the case of a summary task, the respondent is invariably summarizing a text for a reader who already has a notion of what the summary should look like; therefore, the respondent is reacting to a set of perceived and real expectations on the part of the reader. In the real world, we usually summarize a text for our own future use or for the benefit of someone who has not read it, in which case the set of expectations of the summarizer may be quite different.
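For readers unfamiliar with the format under discussion, the sketch below constructs a fixed-ratio cloze passage by deleting every nth word after a short untouched lead-in. The sample text, the deletion ratio, and the function itself are arbitrary illustrations and do not reproduce the instruments used in any of the studies cited.

```python
# Minimal sketch (arbitrary text and deletion ratio): constructing a
# fixed-ratio cloze passage by deleting every nth word.
def make_cloze(text: str, n: int = 7, start: int = 2):
    """Return the gapped passage and the answer key."""
    words = text.split()
    gapped, key = [], []
    for i, word in enumerate(words):
        # Delete every nth word after an untouched lead-in of `start` words.
        if i >= start and (i - start) % n == 0:
            key.append(word)
            gapped.append("_" * max(len(word), 4))
        else:
            gapped.append(word)
    return " ".join(gapped), key

passage = ("Context has been defined in various ways by researchers and has "
           "been associated with notions such as situation setting domain "
           "environment task content topic schema frame and script")
gapped, answers = make_cloze(passage, n=7)
print(gapped)
print("Key:", answers)
```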
  • 40. OPEN-ENDED QUESTIONS AND COMPOSITIONS Like summarization tasks, open-ended tasks allow respondents to copy material directly from a text in their response, so that raters cannot tell whether the respondent in fact understands the material. Such copying may produce linguistically awkward responses. For example, in a study of 19 college-level learners of Hebrew L2 that involved a retrospective verbal report one week after the learners took their final exam, it was found that students lifted material intact from an item stimulus or from a text passage for use in their written responses (Cohen & Aphek 1979). Conclusion This chapter has examined process approaches to language testing, which have usually entailed the use of verbal report techniques to better understand these processes and the test-taking strategies that respondents use. There are not yet abundant data linking the specific use of test-taking strategies with success or failure on language tests. On the other hand, the use of qualitative methodologies such as verbal report provides a valuable source of information — perhaps the most focused possible — on the strategies respondents used in their responses and why they did so. Verbal report can help us see what items are actually testing, aiding us in making decisions about which items to keep and which to throw out. One could go so far as to say that it is now close to essential to have verbal report as part of pretesting/piloting. If a respondent has legitimate reasons for getting an item wrong, then the item needs to be rewritten. The emphasis in this chapter has been on the relationship between the characteristics of the test task and the strategies used, and on the connection between testing methods and SLA research — especially on the types of investigations needed to provide information as to which testing methods would be potentially more or less reliable and valid for SLA research.
  • 41. THANK YOU SO MUCH This is my Painting