‘Teaching MUET, not English’: A Study of the Washback Effect of the
Malaysian University English Test (MUET)
Mohana Nambiar & Shamara Ransirini
1. Introduction
Examinations, tests and all other forms of assessment play a crucial role in the
lives of Malaysians, as the results of these high-stakes assessments seal the fate
of many. A leading newspaper columnist observes that any Malaysian school teacher
is bound to claim that “… assessment is as fundamental to the school curriculum
as oxygen is to life” (Vasugi, 2010:E7). In other words, a curriculum that does not
incorporate tests and examinations is untenable in the eyes of many
educationists. Given that there is an undeniable relationship between
teaching/learning and testing, the issue is whether such a bond is
beneficial or otherwise. The phenomenon of how tests influence teaching and
learning is commonly described as “washback” or “backwash” in language
instruction, and it can be both harmful and beneficial (Hughes, 2003).
The Malaysian University English Test (MUET), introduced in 1999, is an attempt
to arrest the decline in the standard of English among Malaysian students.
Though English is taught as a second language throughout students’ 11 years of
primary and secondary schooling, the desired level of proficiency has not been
achieved. Upon entry into tertiary institutions, many students are unable to
perform well because they lack the level of English needed to handle academic
tasks, especially since most reference materials are in English. The problems
universities face in dealing with students’ inadequate English language
proficiency have aroused grave concern among many parties (Asmah, 1992).
Hence, the main objective of MUET was to “bridge the gap in language needs
between secondary and tertiary education” (Malaysian Examinations Council,
2001). The test was clearly viewed as the primary tool through which changes
in the teaching and learning of English at the pre-tertiary level could be
effected. In other words, the single most important goal in implementing
MUET has been to use it as what Pearson (1988:101) calls a “lever of
change”. Though in many countries national tests are very often used as the
primary devices through which changes in the educational system are introduced
(see Cheng and Curtis, 2004), researchers have argued that such change cannot be
assumed (Shohamy et al., 1996). Reports on MUET results indicate that from
1999 to 2007, less than 1% of candidates achieved the highest score of Band 6,
while more than 50% achieved only the two lowest bands of competency.
This raises the need to explore the type of impact, or washback effect, that
MUET has on English learning and teaching. Recent literature indicates that
washback is complex and in need of more empirical investigation (Alderson and
Hamp-Lyons, 1996; Watanabe, 1996). The purpose of this paper is to report on a
study carried out to investigate how Malaysian students and English teachers
perceive the impact of MUET on their learning and teaching, respectively.
2. The Malaysian University English Test (MUET)
MUET is a criterion-referenced proficiency test like IELTS and TOEFL. It has
four components: listening, reading, speaking and writing. Of the four, the
reading component is given the highest weightage, 45% of the overall score,
followed by writing with 25% (this study was carried out before the format was
slightly altered in 2007; the current weightages are 40% and 30% respectively).
The two remaining components, listening and speaking, are given an equal
weightage of 15% each. The examining authority, the Malaysian Examinations
Council (hereafter MEC), justifies the unequal distribution as necessitated by
the nature of students’ target use of English at tertiary level. The reading
and listening components both rely heavily on multiple-choice items (some
short-answer items have been included in the listening section in the new
format), while the writing and speaking components use more direct methods.
MUET is a prerequisite for entry into local universities, with the minimum
qualification being Band 1, which is attainable by all who take the test.
However, different universities have differing entry requirements, depending on
the programs students wish to pursue.
Commonly known as washback or backwash, the term refers to the impact
testing has on teaching and learning. Messick (1996:241) theorises washback as
“the extent to which the introduction and use of a test influences language
teachers and learners to do things they would not otherwise do that promote or
inhibit language learning”. Wall (1997) distinguishes between test washback and
test impact in terms of the scope of the effects. According to Wall, impact refers
to “… any of the effects that a test may have on individuals, policies or practices,
within the classroom, the school, the educational system or society as a whole”,
while washback is limited to “the effects of tests on teaching and learning”
(Wall, 1997).
The assumption underlying washback is that if a test is a high-stakes test, as
MUET is, preparation for it can dominate all teaching and learning activities.
This is unproblematic as long as the content and format of the test are not at
variance with the objective of the test, which in this case is to promote English
proficiency. Under such circumstances, the test is believed to bring about
positive washback. On the other hand, if teachers exhibit a tendency to “teach to
the test” and students prefer to focus only on the skills and content that will be
tested, this is considered negative washback, as it does not promote the
acquisition of the skills specified in the curriculum; it instead leads to what is
known as a ‘narrowing’ of the curriculum.
4. The study: purpose and design
Despite its importance, it is only recently that washback has been investigated
empirically (Cheng, Watanabe & Curtis, 2004). The small body of research to
date suggests that washback is a highly complex phenomenon and that test
developers alone cannot engineer desirable changes. Alderson (2004) believes
strongly that teacher-related factors play a crucial role: the way teachers prepare
students for the test, their beliefs about teaching and learning, the degree of their
professionalism, and the adequacy of their training and of their understanding of
the nature of and rationale for the test. By the same token, the perceptions that
students, the test-takers, hold about the test can also critically affect its
washback.
As researchers have cautioned, washback cannot be assumed; it needs to be
researched. In Malaysia, there is a paucity of research on washback; one
study that comes to mind is Perumal’s (2009), which investigated how washback
affects the classroom behavior of teachers teaching a newly-introduced course,
English for Science and Technology (EST), for upper secondary students.
Specifically, he looked at the effects of the EST examination on what and how
teachers teach, and his findings suggest that there is both positive and negative
washback on the teaching of EST in Malaysian schools. The positive impact
stems from the test format, but the negative effects cannot be attributed to the
test alone; other sources have also contributed. Perumal (2009) notes that
school policy encouraging the use of past years’ exam papers and sample
question papers for classroom practice, as well as the expectations of parents,
may also have influenced teacher conduct in the classroom.
Given the crucial role that examinations play in the lives of Malaysians, and the
fact that MUET is a high-stakes test, this study focuses on the perceptions of
both students and teachers towards MUET in order to examine the washback
effects of the test. Specifically, the study aims to address two questions:
1. How do Malaysian students and English teachers perceive the impact of
MUET on their learning and teaching respectively?
2. How can the positive test impact of MUET be maximized?
The data-gathering instruments comprised a questionnaire for the students, a
questionnaire for the teachers, interviews with the teachers, and classroom
observations. The student sample comprised 108 students from two secondary
schools (one in Kuala Lumpur and the other in Klang) and a matriculation centre
in Kuala Lumpur. Nine teachers responded to the questionnaire as well as the
interview. Non-participant observation was carried out in eight classes for a total
of 17 hours. The questionnaire data were entered into the statistical package
MINITAB 12.0 and subjected to several statistical procedures, while the
observation and interview data were analyzed holistically.
5. Findings and discussion
The findings of this study reveal the complex nature of students’ perceptions of
the test impact of MUET. When asked whether they thought MUET had contributed
towards an improvement in their language proficiency, an overwhelming majority
of 91% (Table 1) responded in the affirmative; surprisingly, however, only 58% of
students (Table 2) felt that their performance in university would be enhanced
because of MUET.
Table 1: Students’ perception of whether MUET has resulted in language improvement
MUET has resulted in language improvement 91%
MUET has not resulted in language improvement 9%
Table 2: Students’ perception of whether MUET will enhance their
performance in university
This raises the issue of whether students, while perceiving MUET as improving
their language competency, do not see it as preparing them for the academic
requirements of university life. If so, such a perception might seriously threaten
the success of MUET, as the objective of the test is to enhance students’
language competence to a level appropriate for tertiary education. Among the
teachers, on the other hand, 56% believed their students had improved because
of MUET. However, in the interviews they revealed that they were not certain
whether MUET is an exit or an entrance requirement. Most supported the view
that it should be made an entrance test.
If it is argued that students do improve in their language proficiency but lack
awareness of how MUET contributes to their performance in the target situation,
then measures certainly need to be taken to inform students of MUET and its
objectives. If positive test impact is to be maximized, the ambiguity regarding
the status of the test also has to be resolved.
The findings of this study further reveal that washback operates on different
skills in different ways, as students perceive MUET as having a more positive
impact on certain skills than on others (Table 3).
Table 3: Students’ perception of their improvement in the four skills
Even though students feel that they have improved most in speaking and reading,
only 5% of them see reading as the most useful skill in university.
Table 4: Students’ perception of the skill most needed in university
This might in part explain their failure to see how their improvement in the
necessary skills is connected with MUET (this argument is of course not relevant
to speaking, since it heads the list in both Tables 3 and 4). Similarly, it is
perhaps not so surprising that only 30% of the student sample knew the
correct weightage given to the different skills in MUET. Reading carries the
highest weightage (45%), but only 5% of the students consider it the most
useful skill in the target situation (Table 4). There is thus a clear
mismatch between the emphasis MUET gives to the different skills and
students’ awareness of it.
What emerges, however, is the students’ perception of the primacy MUET has
given to speech. Similarly, in the interviews, the school teachers revealed their
preference for MUET over other national high-stakes tests because of its speech
component, which in one interviewee’s words “is something new and …”.
The implications of the findings thus far are:
1. Since the emphasis MUET gives to the respective skills is supposed to
represent the usefulness of these skills in university, the mismatch indicates a
lack of awareness on the students’ part of the skills that they require in university.
2. Students are not test-wise.
3. The lack of awareness regarding reading, though it appears surprising, might
also be a result of the students’ overall perception that it is a “manageable skill”,
meaning it is nothing new, compared to speaking, which, according to the
teachers, has not been tested at the national level before.
4. It could also imply that students de-emphasize the importance of reading as an
academic activity; they might fail to see it as intertwined with writing.
5. It is interesting to note, however, that for the students, speaking heads the list
every time. In the interviews, too, several teachers revealed that they like MUET
because it tests speech. In terms of test impact, the importance given to speech
could certainly be considered positive washback. Nambiar (2005:34) had also
noted that despite the drawbacks associated with the implementation of the
speaking test in MUET, the test format “can bring about desirable backwash
effects since students (and teachers) are known to be extrinsically motivated
by …”.
While it is interesting that MUET is viewed as a test that emphasizes speech, it
is also disturbing if such positive test impact on speech comes at the cost of
other skills. These are issues which warrant further investigation and need to be
addressed if the positive test impact of MUET is to be achieved.
The teachers, for their part, perceived the most important components of MUET
as shown in Table 5:
Table 5: Teachers’ perception of the most important components of MUET
It is noteworthy here that none of the teachers perceived listening as important.
The teachers’ perception of the skill most needed in university followed a similar
pattern, which again excluded listening (Table 6).
Table 6: Teachers’ ranking of the skills in order of usefulness in university
This needs to be examined in the light of the student responses, where a similar
pattern emerges as far as de-emphasizing listening is concerned. As Tables 3
and 4 show, there seems to be a tendency among students to de-emphasize
listening. For instance, students perceive teachers as concentrating on the four
skills in the order given in Table 7:
Table 7: Students’ perception of the order of emphasis given by teachers to the skills
Skill Order of emphasis
This order also corresponds closely to students’ perception of the skill most
needed in university (see Table 4). Therefore it can be argued that students’
perceptions are influenced by their impressions of what teachers emphasize.
These impressions could have arisen as a result of the many speech-related
exercises that teachers put their students through.
However, the teachers stated that they focus on the skills in the order given in
Table 8, which is quite different from students’ perceptions.
Table 8: The order of emphasis on the skills given by teachers
These findings are rather interesting because they suggest that MUET has
succeeded in having the kind of test impact it is intended to have. However, when
the teachers’ claims are considered alongside students’ perceptions, it becomes
apparent that though teachers claim they focus on the skills in the order
MUET requires them to, the fact that they do so has not been conveyed to the
students. Furthermore, it is doubtful whether this alone should be taken as
adequate evidence of positive test impact, for it is certainly not merely the skills
per se that matter, but how these skills are taught and practiced in the classroom.
A finding which could possibly throw light on this issue is the type of activities
conducted in class. According to the students, the following activities are
conducted in MUET classes:
1. Public presentations
2. Listening to live speeches
3. Reading texts that require long answers
These findings indicate that students do engage in listening activities but are not
aware of it. Both students and teachers seem to fail to grasp the fact that
listening is interconnected with speaking. Together with the earlier finding, this
confirms that though listening activities are carried out in class, both teachers
and students tend to downplay their importance. The tendency to de-emphasize
listening could thus be a result of both the negative washback effect of MUET
and the students’ lack of metacognitive awareness.
The teachers’ responses to the same question were:
1. Public presentations
2. Report writing
3. Forums
4. Reading texts that require long answers
The above activities essentially capture all four skills, as well as some activities
which are not tested directly in MUET, such as forums. It appears then that MUET
does have a positive test impact on all four skills, a fact further confirmed in the
interviews, when about 30% of the teachers admitted to concentrating on reading
and writing because these components carry the bulk of the marks in MUET.
These data are self-reported, however, and given that the classroom observations
were sporadic, they do not allow us to arrive at firm conclusions. They do,
however, provide some insight into what is happening in the classrooms. For
instance, though teachers claimed that they focus on reading the most,
classroom observations showed that reading activities were modeled largely on
the MUET format; that is, teachers would discuss reading passages (taken from
textbooks or previous exam papers), which were then followed by multiple-choice
questions. One teacher, however, brought in “authentic” material, i.e. copies of
the daily newspapers (from the students’ response, it was quite evident that this
was “standard practice”), and engaged the class in a meaningful and authentic
reading exercise which also led to the writing of a lengthy follow-up report.
Similarly, though students and teachers both seem to perceive speaking as a
skill that deserves emphasis, among the classes observed it was only in one
class that the skill was practiced in a refreshing manner that emulated the use of
language in the real-life target situation, i.e. the university. However, whether the
teachers who restricted their teaching practices to the test format were the
“norm”, or whether it was just a coincidence that observation took place on days
they had prepared only for test practice, could not be ascertained due to the
randomness of the observation.
The research findings on the materials used by teachers were similar in both the
teachers’ and the students’ self-reported questionnaires. The three most
frequently used materials were:
1. different textbooks
2. authentic materials
3. audio materials.
However, teachers revealed different attitudes towards the textbooks prescribed
by their respective institutions (which will be discussed later). They also revealed
that they were guided by textbooks, the syllabus provided by the MEC, and the
scheme of work given by the school or tertiary institution.
Teachers and students were also asked about the frequency of certain activities
carried out in class (Tables 9 and 10).
Table 9: Teachers’ report of frequency for certain classroom activities
Classroom activities *Mean
Student interaction in class 3.44
Drilling exercises (MUET papers) 2.78
Innovative teaching 3.00
*The highest possible mean value is 4.
Table 10: Students’ report of frequency for certain classroom activities
Classroom activities *Mean
Student participation in class 3.28
Drilling exercises (MUET papers) 2.61
Enjoying the class 3.56
*The highest possible mean value is 4.
It should be noted that the teachers reported the highest mean for student
interaction in class. However, observation revealed that certain classes (about
50% of those observed) were extremely teacher-dominated, with interaction,
both among students and between students and teacher, occurring only when
the teacher prompted it. As the findings indicate, both students and teachers
reported the lowest mean value for practicing sample MUET papers. Though this
can be taken as a positive washback effect, classroom observation again
revealed that most teachers use textbooks that adhere strictly to the examination
format. Teachers had varying responses to these textbooks, however, ranging
from dissatisfaction with a book’s heavily exam-oriented approach to
disappointment that a book was not exam-oriented enough! (Note that the
teachers were from three different institutions, and hence the textbooks they
used also varied.)
It is also interesting that students reported a positive mean value for enjoying
their classes (Table 10). Sixty per cent of the students said that in class they
would like to try out activities that do not mirror the test format; 34% claimed
they were not certain, and only 6% rejected the idea. This indicates a remarkable
degree of flexibility and willingness on the part of the students to engage in
learning activities that may not directly reflect the test but might result in more
meaningful learning.
This willingness needs to be exploited to maximize the positive washback effect
of MUET. It has to be noted, however, that though the majority of teachers
expressed a similar response, they all cited the restrictions imposed by time as a
major obstacle. Several teachers talked at length about how various projects
they had started or wanted to start, such as a class newspaper, journal writing
and creative writing, could not be sustained, or implemented in the first place,
because of time constraints.
When asked what they thought of MUET, teachers came up with varying
responses, which in themselves reflect some of the issues that need to be
addressed in dealing with MUET. Given below are a few examples:
• “In reality, we are not helping them to sit for MUET, neither are we helping
them to learn English - we are losing either way!”
• “The current course is worse than a ‘crash course’ in English.”
• “Students’ proficiency won’t improve, but along the way they will ‘pick up’ …”
• “Nobody asks us about our views and experiences - we are told what to
do, but our opinions are not heard.”
• “There should be more transparency in the entire system.”
In their voices can be heard their frustration at being “forced” to teach to the test,
implying that their potential to be creative and innovative has been seriously
hindered by MUET. While 67% of the teachers said that they enjoyed teaching
MUET classes, the 33% who revealed they did not enjoy it cited the lack of time,
and the restraints imposed by MUET, as factors impeding their teaching.
6. Conclusions and recommendations
The findings of this study reveal that washback operates on different skills in
different ways: for speaking there seems to be positive washback, while for
listening it seems rather negative. As discussed above, this can be attributed to
the perception among students and teachers that speech is important while
listening is not.
Based on the findings of this study, the type of washback effect MUET has on
writing and reading is rather ambivalent. However, as these two skills are
important in the target situation, the test impact on them needs to be maximized
as well. The students’ failure to see them as interconnected needs to be
addressed. At the same time, more studies, preferably longitudinal ones that
include observation, need to be conducted to understand the type of washback
effect MUET has.
As in Alderson and Hamp-Lyons’ (1996) study, this study showed that the type of
washback a test has depends on the teacher as well. The teachers (and students)
in the present study seem to lack a clear understanding of MUET, for they fail to
see MUET as only one format for testing language proficiency. It is this
perception that results in teachers feeling trapped and restricted by the test: “we
are teaching MUET, not English”! Teaching would not be restricted by the test
format if they, as well as the institutions concerned, took cognizance of the
connection between testing and teaching.
As Brown (2002) pointed out in a similar study in Japan, and as the findings of
this study also indicate, if a high-stakes proficiency test like MUET is to have
positive test impact, teachers need to be properly guided and trained in the
objectives and purpose of the test. The teachers’ common complaint, the lack of
time, also needs to be addressed if the positive washback of MUET is to be
maximized. Since teachers currently feel alienated from the testing process, it is
imperative that their involvement be increased and their voices heard.
The current ambiguity regarding the status of the test - whether MUET is an exit
or an entrance requirement, and what the minimum level for entry is - should be
resolved. As Shohamy et al. (1996) have argued, the importance of the test
results and the status of the test also influence the type of washback.
It is quite apparent that the findings of this study echo the negative washback
effects that other researchers have warned us of. As early as 1956, Vernon
noted that teachers tended to neglect activities that do not directly contribute to
doing well in examinations; in effect, “exams distort the curriculum” (Vernon,
1956:166). According to Fullan (1993), cited in Cheng and Curtis (2004),
teachers are under pressure to attain certain goals but are given too little time to
achieve them. In addition, “they are expected to carry forward an innovation that
is generally not of their own making” (Cheng and Curtis, 2004:14). Under such
circumstances, as the findings of this study have shown, teachers feel
disempowered and the intended positive washback does not result.
To sum up, although top-down educational reform strategies such as the
introduction of MUET as a form of leverage are commendable, it is essential to
take into account the roles of the other players, in particular the teachers and the
students. If we want the test to bring about maximum benefits, we need to realize
the potential impact of washback and harness it, for “what is assessed becomes
what is valued, which becomes what is taught” (McEwen, 1995, cited in Cheng
and Curtis, 2004:3).
References
Alderson, J.C. (2004). Foreword. In Cheng, L., Watanabe, Y. & Curtis, A. (Eds.),
Washback in language testing. Mahwah, NJ: Lawrence Erlbaum Associates.
Alderson, J.C. & Hamp-Lyons, L. (1996). TOEFL preparation courses: A study of
washback. Language Testing, 13, 280-297.
Asmah Hj Omar (1992). The linguistic scenery in Malaysia. Kuala Lumpur:
Dewan Bahasa dan Pustaka.
Brown, J.D. (2002). Extraneous variables and the washback effect. JALT Testing
& Evaluation, 6(2), 12-15.
Cheng, L. & Curtis, A. (2004). Washback or backwash: A review of the impact of
testing on teaching and learning. In Cheng, L., Watanabe, Y. & Curtis, A. (Eds.),
Washback in language testing. Mahwah, NJ: Lawrence Erlbaum Associates.
Cheng, L., Watanabe, Y. & Curtis, A. (Eds.) (2004). Preface. In Washback in
language testing. Mahwah, NJ: Lawrence Erlbaum Associates.
Hughes, A. (2003). Testing for language teachers. Cambridge: Cambridge
University Press.
Malaysian Examinations Council. (2001). Malaysian University English Test.
Kuala Lumpur: Malaysian Examinations Council.
Messick, S. (1996). Validity and washback in language testing. Language
Testing, 13, 241-256.
Nambiar, M. (2005). Testing oral interaction: The Malaysian experience.
Australian Language & Literacy Matters, 2(3 & 4), 31-34.
Pearson, I. (1988). Tests as levers of change. In Chamberlain, D. &
Baumgardner, R.J. (Eds.), ESP in the classroom: Practice and evaluation
(pp. 98-107). London: Modern English Publications.
Perumal, J. (2009). Washback effect of the EST examination on teaching.
Unpublished Master in ESL research report. Kuala Lumpur: University of Malaya.
Shohamy, E., Donitsa-Schmidt, S. & Ferman, I. (1996). Test impact revisited:
Washback effect over time. Language Testing, 13, 298-317.
Vasugi, M. (2010). A test of integrity. The Star, 13 June, p. E7.
Wall, D. (1997). Impact and washback in language testing. In Clapham, C. &
Corson, D. (Eds.), Encyclopedia of language and education: Vol. 7. Language
testing and assessment (pp. 291-302). Dordrecht: Kluwer Academic.
Watanabe, Y. (1996). Does grammar translation come from the entrance
examination? Preliminary findings from classroom-based research. Language
Testing, 13, 318-333.