"Coach as Facilitator " Please respond to the following discussion questions:
1. Read the transcript of, “Agile Communication Tools and Techniques” below. Suggest at least two (2) strategies that an agile facilitator can use to coach his / her team during standard agile meetings. Suggest two (2) actions that an agile facilitator should exhibit and two (2) actions that an agile facilitator should not exhibit during the meetings. Provide a rationale for your response.
Communication plays a key part on an Agile project. There are a couple different elements of communication. One of them is written communication, and we saw there that we want to try to be minimalistic in terms of our written communication, but we're going to talk in a second about some of the ways that we can communicate using written forms on an Agile project. But given that we know that we're going to be a little more minimalistic in terms of our written communication, it requires us to be a lot more proactive in terms of our verbal communication. And it's one of the things about Agile that makes it relatively successful is that you're doing a lot more communication, and a lot of the communication that you might have been doing on paper, now you're going to be doing face-to-face, you're going to be doing over the phone. So, there's a heavy reliance in Agile on communication. This is good because a lot of the work of a project requires you to do a lot of communication.
So, for instance, on a normal project, let's say a normal non-Agile project, we might do a weekly status review or status update meeting--very common. On some projects maybe every other week, but let's just say it was a weekly meeting. Well, the thing with an Agile project is, is that is not nearly going to be enough. So with Agile we typically have daily meetings. We have daily meetings with the project team, and if we need our customer involved or our product owners, they're going to also be at the meeting. When we have questions for our product owner or questions for our customer we're going to have to be able to give those questions immediately to the product owner, and we're going to rely on them to give us feedback in a fairly expedited manner. We're not going to be able to have an Agile project that's going to have a two-week iteration if we have to ask a question of our product owner and it takes them a week and a half to get back to us like on a traditional project. So the whole model around short iterative development requires us to have very crisp and very quick communications. So it's actually one of the strengths of an Agile project to be able to communicate effectively.
Now, on the written side, there are some neat areas of communication there as well. One of the things that a lot of Agile projects do is they create an information radiator. And what the radiator means is that we have one place where we have most all of the basic information that's going on with the Agile project. This could just be a ...
Coach as Facilitator Please respond to the following discussion.docx
1. "Coach as Facilitator " Please respond to the following
discussion questions:
1. Read the transcript of “Agile Communication Tools and
Techniques” below. Suggest at least two (2) strategies that an
agile facilitator can use to coach his / her team during standard
agile meetings. Suggest two (2) actions that an agile facilitator
should exhibit and two (2) actions that an agile facilitator
should not exhibit during the meetings. Provide a rationale for
your response.
Communication plays a key part on an Agile project. There are
a couple different elements of communication. One of them is
written communication, and we saw there that we want to try to
be minimalistic in terms of our written communication, but
we're going to talk in a second about some of the ways that we
can communicate using written forms on an Agile project. But
given that we know that we're going to be a little more
minimalistic in terms of our written communication, it requires
us to be a lot more proactive in terms of our verbal
communication. And it's one of the things about Agile that
makes it relatively successful is that you're doing a lot more
communication, and a lot of the communication that you might
have been doing on paper, now you're going to be doing face-to-
face, you're going to be doing over the phone. So, there's a
heavy reliance in Agile on communication. This is good because
a lot of the work of a project requires you to do a lot of
communication.
So, for instance, on a normal project, let's say a normal non-
Agile project, we might do a weekly status review or status
update meeting--very common. On some projects maybe every
other week, but let's just say it was a weekly meeting. Well, the
thing with an Agile project is that that is not nearly going to be
enough. So with Agile we typically have daily meetings. We
have daily meetings with the project team, and if we need our
customer involved or our product owners, they're going to also
be at the meeting. When we have questions for our product
owner or questions for our customer we're going to have to be
able to give those questions immediately to the product owner,
and we're going to rely on them to give us feedback in a fairly
expedited manner. We're not going to be able to have an Agile
project that's going to have a two-week iteration if we have to
ask a question of our product owner and it takes them a week
and a half to get back to us like on a traditional project. So the
whole model around short iterative development requires us to
have very crisp and very quick communications. So it's actually
one of the strengths of an Agile project to be able to
communicate effectively.
Now, on the written side, there are some neat areas of
communication there as well. One of the things that a lot of
Agile projects do is they create an information radiator. And
what the radiator means is that we have one place where we
have most all of the basic information that's going on with the
Agile project. This could just be a whiteboard, or it could be a
set of flip charts, and on that flip chart we might have some
kind of a burndown chart that tells us how many of the use
cases that we've completed in this iteration and all of the prior
iterations and how many are left, we may have a sense for the
errors we're getting--they may actually be written up on the
board. The overall status of the project as of
today could be written on that information radiator and other
things that are interesting and that are going on at any given
time. So if you're part of the Agile team, you don't always have
to rely on going around and talking with everybody to see
what's going on. You have the daily status meeting or the daily
Scrum meeting, as it's called in the Scrum methodology, or
you have some kind of information radiator that basically is a
snapshot of what's going on with the project at any given time.
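The burndown tracking described above can be sketched as a short calculation. This is an illustrative example with made-up numbers, not something taken from the transcript:

```python
# Illustrative burndown calculation for an information radiator.
# The backlog size and per-iteration counts below are hypothetical.
def burndown(total_items, completed_per_iteration):
    """Return the number of items remaining after each iteration."""
    remaining = [total_items]
    for done in completed_per_iteration:
        remaining.append(remaining[-1] - done)
    return remaining

# 40 use cases in the backlog; 8, 10, and 7 completed in iterations 1-3.
print(burndown(40, [8, 10, 7]))  # [40, 32, 22, 15]
```

Plotting the returned list against iteration number gives the familiar downward-sloping burndown line that teams post on the radiator.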
Another thing that most Agile teams have is some kind of team
space which is also fairly important. If the team is co-located,
that is if they tend to all be in the same area, you can actually
have a physical team space that you can use--actually for people
to work in if not to meet in, they could actually be working in
there--but if not at least you've got a place that you can always
count on to meet. You can meet informally in there, or formally,
at any given time. You can put information up on the walls; you
can put flow charts up on the walls. Basically, it's a whole room
dedicated to communication, being able to walk in and see
what's going on with a project at any given time. Now, if your
team is virtual, what we're going to try to do is create a virtual
team room, with some kind of a technology, maybe a team room
software package that we can use so that at any given time, we
can go to a certain area and see what's going on in terms of
status, in terms of progress, in terms of the end of the iteration--
you know, problems that we're facing, etc.
One of the other things that the team room does for us, if we're
able to be co-located--this is not something that you can take
advantage of with a virtual team--but if we have a team room
where people can actually visit and actually can meet together,
then we have something that we can take advantage of called
osmotic communication. And what osmotic communication is, is
communication that you hear on the periphery. In other
words, you may be talking to somebody on the Agile team in the
team room, and somebody else might come in and have a
discussion on some Agile requirement or maybe some kind of
problem that they're facing, and even though you're not part of
that particular problem or that particular requirement, you can't
help but hear because you're co-located in that team room, and
there may be something that will come up that'll just catch your
ear, and you'll be able to contribute something to that other
area, that other problem, or that other requirement. That ability
to do that is called osmotic communication. Now you can't do
that if you're in a team room and you have headphones on all
day listening to music, but assuming that you have a place that
you're all located together, that's another thing that we can take
advantage of on an Agile project.
2. Recommend two (2) ways that one can use powerful
observations, powerful questions, and powerful challenges to
help a team’s communication. Include two (2) examples of the
recommended actions to support your response.
RESEARCH REPORT SERIES
(Survey Methodology #2006-13)
Survey Questionnaire Construction
Elizabeth Martin
Director’s Office
U.S. Census Bureau
Washington, D.C. 20233
Report Issued: December 21, 2006
Disclaimer: This report is released to inform interested parties
of research and to encourage discussion. The views expressed are
those of the author and not necessarily those of the U.S. Census
Bureau.
Glossary
closed question A survey question that offers
response categories.
context effects The effects that prior questions
have on subsequent responses.
open question A survey question that does not
offer response categories.
recency effect Overreporting events in the most
recent portion of a reference period, or a
tendency to select the last-presented response
alternative in a list.
reference period The period of time for which a
respondent is asked to report.
response effects The effects of variations in
question wording, order, instructions, format,
etc. on responses.
retention interval The time between an event to
be remembered and a recall attempt.
screening questions Questions designed to
identify specific conditions or events.
split-sample An experimental method in which a
sample is divided into random subsamples and
a different version of a questionnaire is
assigned to each.
standardized questionnaire The wording and
order of questions and response choices are
scripted in advance and administered as
worded by interviewers.
Questionnaires are used in sample surveys or
censuses to elicit reports of facts, attitudes, and
other subjective states. Questionnaires may be
administered by interviewers in person or by
telephone, or they may be self-administered on
paper or another medium, such as audio-cassette or
the internet. Respondents may be asked to report
about themselves, others in their household, or
other entities, such as businesses. This article
focuses on construction of standardized survey
questionnaires.
The utility of asking the same questions across
a broad group of people in order to obtain
comparable information from them has been
appreciated at least since 1086, when William the
Conqueror surveyed the wealth and landholdings of
England using a standard set of inquiries and
compiled the results in the “Domesday Book.”
Sophistication about survey techniques has
increased vastly since then, but fundamental
insights about questionnaires advanced less during
the millennium than might have been hoped. For
the most part, questionnaire construction has
remained more an art than a science. In recent
decades there have been infusions of theory from
relevant disciplines (such as cognitive psychology
and linguistic pragmatics), testing and evaluation
techniques have grown more comprehensive and
informative, and knowledge about questionnaire
design effects and their causes has cumulated.
These developments are beginning to transform
survey questionnaire construction from an art to a
science.
Theoretical Perspectives on Asking
and Answering Questions
Three theoretical perspectives point toward
different issues that must be considered in
constructing a questionnaire.
The Model of the Standardized Survey
Interview
From this perspective, the questionnaire consists of
standardized questions that operationalize the
measurement constructs. The goal is to present a
uniform stimulus to respondents so that their
responses are comparable. Research showing that
small changes in question wording or order can
substantially affect responses has reinforced the
assumption that questions must be asked exactly as
worded, and in the same order, to produce
comparable data.
Question Answering as a Sequence of
Cognitive Tasks
A second theoretical perspective was stimulated by
efforts to apply cognitive psychology to understand
and perhaps solve recall and reporting errors in
surveys of health and crime. A respondent must
perform a series of cognitive tasks in order to
answer a survey question. He or she must
comprehend and interpret the question, retrieve
relevant information from memory, integrate the
information, and respond in the terms of the
question. At each stage, errors may be introduced.
Dividing the response process into components has
provided a framework for exploring response
effects, and has led to new strategies for
questioning. However, there has been little
research demonstrating that respondents actually
engage in the hypothesized sequence of cognitive
operations when they answer questions, and the
problems of retrieval that stimulated the
application of cognitive psychology to survey
methodology remain nearly as difficult as ever.
The Interview as Conversation
Respondents do not necessarily respond to the
literal meaning of a question, but rather to what
they infer to be its intended meaning. A survey
questionnaire serves as a script performed as part
of an interaction between respondent and
interviewer. The interaction affects how the script
is enacted and interpreted. Thus, the construction
of meaning is a social process, and is not carried by
question wording alone. Participants in a
conversation assume it has a purpose, and rely
upon implicit rules in a cooperative effort to
understand and achieve it. They take common
knowledge for granted and assume that each
participant will make his contribution relevant and
as informative as required, but no more informative
than necessary. (These conversational maxims
were developed by Paul Grice, a philosopher.) The
resulting implications for the interview process are:
1. Asking a question communicates that a
respondent should be able to answer it.
2. Respondents interpret questions to make
them relevant to the perceived intent.
3. Respondents interpret questions in ways
that are relevant to their own situations.
4. Respondents answer the question they think
an interviewer intended to ask.
5. Respondents do not report what they believe
an interviewer already knows.
6. Respondents avoid providing redundant
information.
7. If response categories are provided, at least
one is true.
These implications help us understand a number of
well-established questionnaire phenomena.
Consistent with item 1, many people will answer
survey questions about unfamiliar objects using the
question wording and context to construct a
plausible meaning. As implied by items 2 and 3,
interpretations of questions vary greatly among
respondents. Consistent with item 4, postinterview
studies show that respondents do not believe the
interviewer “really” wants to know everything that
might be reported, even when a question asks for
complete reports. Consistent with items 5 and 6,
respondents reinterpret questions to avoid
redundancy. As implied by item 7, respondents
are unlikely to volunteer a response that is not
offered in a closed question.
The conversational perspective has been the
source of an important critique of standardization,
which is seen as interfering with the conversational
resources that participants would ordinarily
employ to reach a common understanding, and it
has led some researchers to advocate flexible
rather than standardized questioning. A
conversational perspective naturally leads to a
consideration of the influences that one question
may have on interpretations of subsequent ones,
and also the influence of the interview
context–what respondents are told and what they
infer about the purposes for asking the
questions–on their interpretations and responses.
Constructing Questionnaires
Constructing a questionnaire involves many
decisions about the wording and ordering of
questions, selection and wording of response
categories, formatting and mode of administration
of the questionnaire, and introducing and
explaining the survey. Although designing a
questionnaire remains an art, there is increasing
knowledge available to inform these decisions.
Question Wording
Although respondents often seem to pay scant
attention to survey questions or instructions, they
are often exquisitely sensitive to subtle changes in
words and syntax. Question wording effects speak
to the power and complexity of language
processing, even when respondents are only half
paying attention.
A famous experiment illustrates the powerful
effect that changing just one word can have in rare
cases. In a national sample, respondents were
randomly assigned to be asked one of two
questions:
1. “Do you think the United States should
allow public speeches against democracy?”
2. “Do you think the United States should
forbid public speeches against democracy?”
Support for free speech is greater–by more than 20
percentage points–if respondents answer question
2 rather than question 1. That is, more people
answer “no” to question 2 than answer “yes” to
question 1; “not allowing” speeches is not the same
as “forbidding” them, even though it might seem to
be the same. The effect was first found by Rugg in
1941 and later replicated by Schuman and Presser
in the United States and by Schwarz in Germany in
the decades since, so it replicates in two languages
and has endured over 50 years–even as support for
freedom of speech has increased, according to both
versions.
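The allow/forbid result rests on the split-sample method defined in the glossary: random subsamples receive different question versions, so wording is the only systematic difference between groups. The comparison can be sketched as follows, with hypothetical counts rather than Rugg's actual data:

```python
# Illustrative split-sample comparison in the style of the
# allow/forbid experiment. The counts below are hypothetical,
# not Rugg's actual 1941 data.
def support_rate(n_supporting, n_asked):
    """Share of a subsample whose answer supports free speech."""
    return n_supporting / n_asked

# Version 1 ("allow"): 520 of 1,000 answer "yes" (supports free speech).
# Version 2 ("forbid"): 750 of 1,000 answer "no" (supports free speech).
allow_support = support_rate(520, 1000)
forbid_support = support_rate(750, 1000)
difference_in_points = round(100 * (forbid_support - allow_support), 1)
print(difference_in_points)  # 23.0
```

Because assignment to versions is random, a difference of this size can be attributed to the wording change rather than to differences between the subsamples.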
Terminology
“Avoid ambiguity” is a truism of questionnaire
design. However, language is inherently
ambiguous, and seemingly simple words may have
multiple meanings. Research by Belson and others
demonstrates that ordinary words and phrases, such
as “you,” “children,” and “work,” are interpreted
very differently by different respondents.
Complexity and Ambiguity
Both cognitive and linguistic factors may impede
respondents’ ability to understand a question at all,
as well as give rise to variable or erroneous
interpretations. Questionnaire designers often
intend a survey question to be interpreted literally.
For example:
“During the past 12 months, since January 1, 1987,
how many times have you seen or talked with a
doctor or assistant about your health? Do not count
any times you might have seen a doctor while you
were a patient in a hospital, but count all other
times you actually saw or talked to a medical
doctor of any kind about your health.”
Such questions challenge respondents who must
parse the question, interpret its key referents (i.e.,
“doctor or assistant,” “medical doctor of any
kind”), infer the events to be included (visits to
discuss respondent’s health in person or by
telephone during the past 12 months) and excluded
(visits while in a hospital), and keep in mind all
these elements while formulating an answer. Apart
from a formidable task of recall, parsing such a
complex question may overwhelm available mental
resources so that a respondent does not understand
the question fully or at all. Processing demands are
increased by embedded clauses or sentences (e.g.,
“while you were a patient in a hospital”) and by
syntactic ambiguity. An example of syntactic
ambiguity appears in an instruction on a U. S.
census questionnaire to include “People living here
most of the time while working, even if they have
another place to live.” The scope of the quantifier
“most” is ambiguous and consistent with two
possible interpretations: (i) “...[most of the
time][while working]...” and (ii) “...[most of the
[time while working]]....”
Ambiguity also can arise from contradictory
grammatical and semantic elements. For example,
it is unclear whether the following question asks
respondents to report just one race: “I am going to
read you a list of race categories. Please choose
one or more categories that best indicate your
race.” “One or more” is contradicted by the
singular reference to “race” and by “best indicate,”
which is interpretable as a request to select one.
Cognitive overload due to complexity or
ambiguity may result in portions of a question
being lost, leading to partial or variable
interpretations and misinterpretations. Although
the negative effects of excessive burden on
working memory are generally acknowledged, the
practical limits for survey questions have not been
determined, nor is there much research on the
linguistic determinants of survey question
comprehension.
Presupposition
A presupposition is true regardless of whether the
statement itself is true or false–that is, it is constant
under negation. (For example, the sentences “I am
proud of my career as a survey methodologist” and
“I am not proud of my career as a survey
methodologist” both presuppose I have a career as
a survey methodologist.) A question generally
shares the presuppositions of its assertions. “What
are your usual hours of work?” presupposes that a
respondent works, and that his hours of work are
regular. Answering a question implies accepting
its presuppositions, and a respondent may be led to
provide an answer even if its presuppositions are
false. Consider an experiment by Loftus in which
subjects who viewed accident films were asked
“Did you see a broken headlight?” or “Did you see
the broken headlight?” Use of the definite article
triggers the presupposition that there was a broken
headlight, and people asked the latter question
were more likely to say “yes,” irrespective of
whether the film showed a broken headlight.
As described by Levinson, linguists have
isolated a number of words and sentence
constructions that trigger presuppositions, such as
change of state verbs (e.g., “Have you stopped
attending church?”), and factive verbs (e.g.,
“regret,” “realize,” and “know”). (For example, “If
you knew that the AMA is opposed to Measure H,
would you change your opinion from for Measure
H to against it?” presupposes the AMA is opposed
to Measure H.) Forced choice questions, such as
“Are you a Republican or a Democrat?”
presuppose that one of the alternatives is true.
Fortunately for questionnaire designers,
presuppositions may be cancelled. “What are your
usual hours of work?” might be reworded to ask,
“What are your usual hours of work, or do you not
have usual hours?” Filter questions [e.g., “Do you
work?” and (if yes) “Do you work regular hours?”]
can be used to test and thereby avoid unwarranted
presuppositions.
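The filter-question strategy can be sketched as simple skip logic. The question texts follow the examples above; the control flow itself is a hypothetical illustration, not drawn from any actual survey instrument:

```python
# Skip logic: each filter question tests a presupposition before the
# question that depends on it is asked. Hypothetical illustration.
def usual_hours_questions(works, works_regular_hours=False):
    """Return the sequence of questions a respondent would be asked."""
    asked = ["Do you work?"]
    if not works:
        return asked  # presupposition fails; skip the rest
    asked.append("Do you work regular hours?")
    if works_regular_hours:
        asked.append("What are your usual hours of work?")
    return asked

print(usual_hours_questions(works=True, works_regular_hours=True))
```

A respondent who does not work is never asked about usual hours, so the unwarranted presupposition is never imposed.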
Question Context and Order
Question order changes the context in which a
particular question is asked. Prior questions can
influence answers to subsequent questions through
several mechanisms. First, the semantic content of
a question can influence interpretations of
subsequent questions, especially when the
subsequent questions are ambiguous. For example,
an obscure “monetary control bill” was more likely
to be supported when a question about it appeared
after questions on inflation, which presumably led
respondents to infer the bill was an anti-inflation
measure.
Second, the thoughts or feelings brought to
mind while answering a question may influence
answers to subsequent ones. This is especially
likely when an answer to a question creates
expectations for how a subsequent one should be
answered. A famous experiment manipulated the
order of a pair of questions:
“Do you think the United States should let
Communist newspaper reporters from other
countries come in here and send back to their
papers the news as they see it?”
“Do you think a Communist country like
Russia should let American newspaper reporters
come in and send back to America the news as they
see it?”
Respondents were much more likely to think
Communist reporters should be allowed in the
United States if they answered that question
second. Respondents apparently answered
whichever question was asked first in terms of pro-
American or anti-Communist sentiments. The
second question activated a norm of reciprocity.
Since many respondents felt constrained to treat
reporters from both countries equally, they gave an
answer to the second question that was consistent
with the first.
Third, following conversational maxims,
respondents may interpret questions so they are not
redundant with prior questions. When a specific
question precedes a general question, respondents
“subtract” their answer to the specific question
from their answer to the general one, under certain
circumstances. Respondents asked questions about
marital satisfaction and general life satisfaction
reinterpret the general question to exclude the
specific one: “Aside from your marriage, which
you already told us about, how satisfied are you
with other aspects of your life?”
This type of context effect, called a part–whole
effect by Schuman and Presser, can occur for
factual as well as attitudinal questions. For
example, race and Hispanic origin items on the U.
S. census form are perceived as redundant by many
respondents, although they are officially defined as
different. When race (the more general item)
appears first, many Hispanic respondents fail to
find a race category with which they identify, so
they check “other” and write in “Hispanic.” When
Hispanic origin is placed first so that such
respondents first have a chance to report their
Hispanic identity, they are less likely to report their
Hispanic origin in the race item. Thus, when the
specific item comes first, many respondents
reinterpret race to exclude the category Hispanic.
In this case, manipulating the context leads to
reporting that is more consistent with measurement
objectives.
One might wonder why a prior question about
marital satisfaction would lead respondents to
exclude, rather than include, their feelings about
their marriages in their answers to a general life
satisfaction question. Accounts of when
information primed by a prior question will be
subtracted rather than assimilated into later
answers or interpretations have been offered by
Schwarz and colleagues and by Tourangeau et al.
The argument is that when people are asked to
form a judgment they must retrieve some cognitive
representation of the target stimulus, and also must
determine a standard of comparison to evaluate it.
Some of what they call to mind is influenced by
preceding questions and answers, and this
temporarily accessible information may lead to
context effects. It may be added to (or subtracted
from) the representation of the target stimulus. The
questionnaire format and the content of prior
questions may provide cues or instructions that
favor inclusion or exclusion. For example,
Schwarz and colleagues induced either an
assimilation or a contrast effect in German
respondents’ evaluations of the Christian
Democratic party by manipulating a prior
knowledge question about a highly respected
member (X) of the party. By asking “Do you
happen to know which party X has been a member
of for more than twenty years?” respondents were
led to add their feelings about X to their evaluation
of the party in a subsequent question, resulting in
an assimilation effect. Asking “Do you happen to
know which office X holds, setting him aside from
party politics?” led them to exclude X from their
evaluation of the party, resulting in a contrast
effect.
Alternatively, the information brought to mind
may influence the standard of comparison used to
judge the target stimulus and result in more general
context effects on a set of items, not just the target.
For example, including Mother Teresa in a list of
public figures whose moral qualities were to be
evaluated probably would lower the ratings for
everyone else on the list. Respondents anchor a
scale to accommodate the range of stimuli
presented to them, and an extreme (and relevant)
example in effect shifts the meaning of the scale.
This argues for explicitly anchoring the scale to
incorporate the full range of values, to reduce such
contextual influences.
Response Categories and Scales
The choice and design of response categories are
among the most critical decisions about a
questionnaire. As noted, a question that offers a
choice among alternatives presupposes that one of
them is true. This means that respondents are
unlikely to volunteer a response option that is not
offered, even if it might seem an obvious choice.
Open versus Closed Questions
An experiment by Schuman and Presser compared
open and closed versions of the question, “What do
you think is the most important problem facing this
country at present?” The closed alternatives were
developed using responses to the open-ended
version from an earlier survey. Just as the survey
went in the field, a prolonged cold spell raised
public fears of energy shortage. The open version
registered the event: “food and energy shortages”
responses were given as the most important
problem by one in five respondents. The closed
question did not register the energy crisis because
the category was not offered in the closed question,
and only one respondent volunteered it.
This example illustrates an advantage of open
questions, their ability to capture answers
unanticipated by questionnaire designers. They
can provide detailed responses in respondents’ own
words, which may be a rich source of data. They
avoid tipping off respondents as to what response
is normative, so they may obtain more complete
reports of socially undesirable behaviors. On the
other hand, responses to open questions are often
too vague or general to meet question objectives.
Closed questions are easier to code and analyze
and compare across surveys.
Types of Closed-Response Formats
The previous example illustrates that response
alternatives must be meaningful and capture the
intended range of responses. When respondents
are asked to select only one response, response
alternatives must also be mutually exclusive.
The following are common response formats:
Agree–disagree: Many survey questions do
not specify response alternatives but invite a “yes”
or “no” response. Often, respondents are offered
an assertion to which they are asked to respond: for
example, “Do you agree or disagree?–Money is the
most important thing in life.” Possibly because
they state only one side of an issue, such items
encourage acquiescence, or a tendency to agree
regardless of content, especially among less
educated respondents.
Forced choice: In order to avoid the effects of
acquiescence, some methodologists advocate
explicitly mentioning the alternative responses. In
a stronger form, this involves also providing
substantive counterarguments for an opposing
view:
“If there is a serious fuel shortage this winter,
do you think there should be a law requiring people
to lower the heat in their homes, or do you oppose
such a law?”
“If there is a serious fuel shortage this winter,
do you think there should be a law requiring people
to lower the heat in their homes, or do you oppose
such a law because it would be too difficult to
enforce?”
Formal balance, as in the first question, does
not appear to affect response distributions, but
providing counterarguments does consistently
move responses in the direction of the
counterarguments, according to Schuman and
Presser’s experiments. Devising response options
with counterarguments may not be feasible if there
are many plausible reasons for opposition, since
the counterargument can usually only capture one.
Ordered response categories or scales:
Respondents may be asked to report in terms of
absolute frequencies (e.g., “Up to ½ hour, ½ to 1
hour, 1 to 1 ½ hours, 1 ½ to 2 hours, 2 to 2 ½
hours, More than 2 ½ hours”), relative frequencies
(e.g., “All of the time, most of the time, a good bit
of the time, some of the time, a little bit of the time,
none of the time”), evaluative ratings (e.g.,
“Excellent, pretty good, only fair, or poor”), and
numerical scales (e.g., “1 to 10” and “−5 to +5”).
Response scales provide a frame of reference
that may be used by respondents to infer a
normative response. For example, Schwarz and
colleagues compared the absolute frequencies scale
presented in the previous paragraph with another
that ranged from “Up to 2 ½ hours” to “More than
4 ½ hours” in a question asking how many hours a
day the respondent watched television. The higher
scale led to much higher frequency reports,
presumably because many respondents were
influenced by what they perceived to be the
normative or average (middle) response in the
scale. If there is a strong normative expectation, an
open-ended question may avoid this source of bias.
Frequently, ordered categories are intended to
measure where a respondent belongs on an
underlying dimension (scale points may be further
assumed to be equidistant). Careful grouping and
labeling of categories is required to ensure they
discriminate. Statistical tools are available to
evaluate how well response categories perform.
For example, an analysis by Reeve and Mâsse (see
Presser et al.) applied item response theory to
show that “a good bit of the time” in the relative
frequencies scale presented previously was not
discriminating or informative in a mental health
scale.
Rating scales are more reliable when all points
are labeled and when a branching structure is used,
with an initial question (e.g., “Do you agree or
disagree...”) followed up by a question inviting
finer distinctions (“Do you strongly agree/disagree,
or somewhat agree/disagree?”), according to
research by Krosnick and colleagues and others.
The recommended number of categories in a scale
is 7, plus or minus 2. Numbers assigned to scale
points may influence responses, apart from the
verbal labels. Response order may influence
responses, although the basis for primacy effects
(i.e., selecting the first category) or recency effects
(i.e., selecting the last category) is not fully
understood. Primacy effects are more likely with
response options presented visually (in a self-
administered questionnaire or by use of a show
card) and recency effects with aural presentation
(as in telephone surveys).
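The branching structure described above can be sketched as a small scoring routine. The category labels and the mapping onto a 7-point scale below are illustrative assumptions, not taken from any specific survey instrument:

```python
# Illustrative scoring of a branched rating item: an initial direction
# question ("agree" / "disagree" / "neither") followed by an intensity
# probe, combined into a fully labeled 7-point scale.
# Labels and the 1-7 mapping are assumptions for illustration.

SCALE = {
    ("disagree", "strongly"): 1,
    ("disagree", "moderately"): 2,
    ("disagree", "somewhat"): 3,
    ("agree", "somewhat"): 5,
    ("agree", "moderately"): 6,
    ("agree", "strongly"): 7,
}

def branched_score(direction, intensity=None):
    """Combine the two branch answers into a single 7-point score."""
    if direction == "neither":
        return 4  # scale midpoint; no intensity probe is asked
    return SCALE[(direction, intensity)]

print(branched_score("agree", "strongly"))  # 7
print(branched_score("neither"))            # 4
```

Each respondent answers two easy questions rather than one question with seven options, which is one reason branching tends to improve reliability, particularly in aural modes.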
Offering an Explicit “Don’t Know” Response Option
Should “don’t know” be offered as an explicit
response option? On the one hand, this has been
advocated as a way of filtering out respondents
who do not have an opinion and whose responses
might therefore be meaningless. On the other
hand, it increases the number of respondents who
say “don’t know,” resulting in loss of data.
Schuman and Presser find that the relative
proportions choosing the substantive categories are
unaffected by the presence of a “don’t know”
category, and research by Krosnick and others
suggests that offering “don’t know” does not
improve data quality or reliability. Apparently,
many respondents who take the easy out by saying
“don’t know” when given the opportunity are
capable of providing meaningful and valid
responses. Thus, “don’t know” responses are best
discouraged.
Communicating Response Categories and the Response Task
Visual aids, such as show cards, are useful for
communicating response categories to respondents
in personal interviews. In self-administered
questionnaires, the categories are printed on the
questionnaire. In either mode, the respondent does
not have to remember the categories while
formulating a response, but can refer to a printed
list. Telephone interviews, on the other hand,
place more serious constraints on the number of
response categories; an overload on working
memory probably contributes to the recency effects
that can result from auditory presentation of
response options. Redesigning questions to
branch, so that each part involves a smaller number
of options, reduces the difficulty. Different
formats for presenting response alternatives in
different modes may cause mode biases; on the
other hand, the identical question may result in
different response biases (e.g., recency or primacy
effects) in different modes. Research is needed on
this issue, especially as it affects mixed mode
surveys.
The same general point applies to
communicating the response task. For example, in
developmental work conducted for implementation
of a new census race question that allowed reports
of more than one race, it proved difficult to get
respondents to notice the “one or more” option.
One design solution was to introduce redundancy,
so respondents had more than one chance to absorb
it.
Addressing Problems of Recall and Retrieval
Psychological theory and evidence support several
core principles about memory that are relevant to
survey questionnaire construction:
1. Autobiographical memory is reconstructive
and associative.
2. Autobiographical memory is organized
hierarchically. (Studies of free recall suggest the
organization is chronological, with memories for
specific events embedded in higher order event
sequences or periods of life.)
3. Events that were never encoded (i.e.,
noticed, comprehended, and stored in memory)
cannot be recalled.
4. Cues that reinstate the context in which an
event was encoded aid memory retrieval.
5. Retrieval is effortful and takes time.
6. Forgetting increases with the passage of
time due to decay of memory traces and to
interference from new, similar events.
7. The characteristics of events influence their
memorability: salient, consequential events are
more likely to be recalled than inconsequential or
trivial ones.
8. Over time, memories become less
idiosyncratic and detailed, and more schematic and
less distinguishable from memories for other
similar events.
9. The date an event occurred is usually one of
its least accurately recalled features.
Principle 6 is consistent with evidence of an
increase in failure to report events, such as
hospitalizations or consumer purchases, as the time
between the event and the interview–the retention
interval–increases. Hospitalizations of short
duration are more likely to be forgotten than those
of long duration, illustrating principle 7. A second
cause of error is telescoping. A respondent who
recalls that an event occurred may not recall when.
On balance, events tend to be recalled as
happening more recently than they actually
did–that is, there is forward telescoping, or events
are brought forward in time. Forward telescoping
is more common for serious or consequential
events (e.g., major purchases and crimes that were
reported to police). Backward telescoping, or
recalling events as having happened longer ago
than they did, also occurs. The aggregate effect of
telescoping and forgetting is a pronounced recency
bias, or piling up of reported events in the most
recent portion of a reference period. Figure 1
illustrates the effect for two surveys.
The rate for the month prior to the interview is
taken as a base and the rates for other months are
calculated relative to it. Line 3 shows that
monthly victimization rates decline monotonically
each month of a 6-month reference period. Lines
1 and 2 show the same for household repairs over
a 3-month reference period; note the steeper
decline for minor repairs.
Figure 1 Recency bias for two surveys. Sources: Neter, J.
and Waxberg, J. (1964) “A Study of Response Errors in
Expenditures Data from Household Interviews.” J. Am. Stat.
Assoc. 59:18-55; Biderman, A. D. and Lynch, J. P. (1981)
“Recency Bias in Data on Self-Reported Victimization” Proc.
Social Stat. Section (Am. Stat. Assoc.): 31-40.
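The computation behind Figure 1 can be reproduced with a few lines of code. The monthly report counts below are invented for illustration; only the method (expressing each month's rate relative to the month just prior to the interview) comes from the text:

```python
# Relative monthly rates as in Figure 1: the month just before the
# interview (month 1) is the base, and rates for earlier months are
# expressed relative to it. Counts are invented for illustration.

reports_by_month = {1: 120, 2: 95, 3: 80, 4: 68, 5: 60, 6: 55}

base = reports_by_month[1]
relative_rates = {m: round(n / base, 2)
                  for m, n in reports_by_month.items()}
print(relative_rates)
# A monotonic decline with distance from the interview is the
# signature of forgetting combined with forward telescoping.
```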
Recent theories explain telescoping in terms of an
increase in uncertainty about the timing of older
events. Uncertainty only partially explains
telescoping, however, since it predicts more
telescoping of minor events than of major ones, but
in fact the opposite occurs.
Because of the serious distortions introduced
by failure to recall and by telescoping, survey
methodologists are generally wary of “Have you
ever...?”-type questions that ask respondents to
recall experiences over a lifetime. Instead, they
have developed various questioning strategies to
try to improve respondents’ recall.
Strategies to Improve Temporal Accuracy
In order to improve recall accuracy, questions are
usually framed to ask respondents to recall events
that occurred during a reference period of definite
duration. Another procedure is to bound an
interview with a prior interview, in order to prevent
respondents from telescoping in events that
happened before the reference period. Results of
the bounding interview are not included in survey
estimates. Another method attempts to make the
boundary of the reference period more vivid by
associating it with personal or historical landmark
events. This can reduce telescoping, especially if
the landmark is relevant to the types of events a
respondent is asked to recall. A more elaborate
procedure, the event history calendar, attempts to
structure flexible questions in a way that reflects
the organization of memory and has proved
promising in research by Belli and associates.
For many survey questions, respondents may
rely on a combination of memory and judgment to
come up with answers. When the number of
events exceeds 10, very few respondents actually
attempt to recall and enumerate each one. Instead,
they employ other strategies, such as recalling a
few events and extrapolating a rate over the
reference period, retrieving information about a
benchmark or standard rate and adjusting upward
or downward, or guessing. By shortening the
reference period, giving respondents more time, or
decomposing a question into more specific
questions, questionnaire designers can encourage
respondents to enumerate episodes if that is the
goal.
Aided and Unaided Recall
In general, unaided (or free) recall produces less
complete reporting than aided recall. It may also
produce fewer erroneous reports. Cues and
reminders serve to define the scope of eligible
events and stimulate recall of relevant instances.
A cuing approach was employed to improve
victimization reporting in a 1980s redesign of the
U.S. crime victimization survey. Redesigned
screening questions were structured around
multiple frames of reference (acts, locales,
activities, weapons, and things stolen), and
included numerous cues to stimulate recall,
including recall for underreported, sensitive, and
nonstereotypical crimes. The result was much
higher rates of reporting.
Although cuing improves recall, it can also
introduce error, because it leads to an increase in
reporting of ineligible incidents as well as eligible
ones. In addition, the specific cues can influence
the kinds of events that are reported. The crime
survey redesign again is illustrative. Several crime
screener formats were tested experimentally. The
cues in different screeners emphasized different
domains of experience, with one including more
reminders of street crimes and another placing
more emphasis on activities around the home.
Although the screeners produced the same overall
rates of victimization, there were large differences
in the characteristics of crime incidents reported.
More street crimes and many more incidents
involving strangers as offenders were elicited by
the first screener.
Dramatic cuing effects such as this may result
from the effects of two kinds of retrieval
interference. Part-set cuing occurs when specific
cues interfere with recall of noncued items in the
same category. For example, giving “knife” as a
weapons cue would make respondents less likely to
think of “poison” or “bomb” and (by inference)
less likely to recall incidents in which these
noncued items were used as weapons. The effect
would be doubly biasing if (as is true in
experimental studies of learning) retrieval in
surveys is enhanced for cued items and depressed
for noncued items.
A second type of interference is a retrieval
block that occurs when cues remind respondents of
details of events already mentioned rather than
triggering recall of new events. Recalling one
incident may block retrieval of others, because a
respondent in effect keeps recalling the same
incident. Retrieval blocks imply underreporting of
multiple incidents. Early cues influence which
event is recalled first, and once an event is recalled,
it inhibits recall for additional events. Therefore,
screen questions or cues asked first may unduly
influence the character of events reported in a
survey.
Another illustration of cuing or example effects
comes from the ancestry question in the U.S.
census. "English" appeared first in the list of
examples following the ancestry question in 1980,
but was dropped in 1990. There was a
corresponding decrease from 1980 to 1990 of about
17 million persons reporting English ancestry.
There were also large increases in the numbers
reporting German, Acadian/Cajun, or French-
Canadian ancestry, apparently due to the listing of
these ancestries as examples in 1990 but not 1980,
or their greater prominence in the 1990 list. These
effects of examples, and their order, may occur
because respondents write in the first ancestry
listed that applies to them. In a related question,
examples did not have the same effect. Providing
examples in the Hispanic origin item increased
reporting of specific Hispanic origin groups, both
of example groups and of groups not listed as
examples, apparently because examples helped
communicate the intent of the question.
Tools for Pretesting and Evaluating Questions
It has always been considered good survey practice
to pretest survey questions to ensure they can be
administered by interviewers and understood and
answered by respondents. Historically, such
pretests involved interviewers completing a small
number of interviews and being debriefed.
Problems were identified based on interview
results, such as a large number of “don’t know”
responses, or on interviewers’ reports of their own
or respondents’ difficulties with the questions.
This type of pretest is still valuable, and likely to
turn up unanticipated problems. (For automated
instruments, it is essential also to test the
instrument programming.) However, survey
researchers have come to appreciate that many
questionnaire problems are likely to go undetected
in a conventional pretest, and in recent decades the
number and sophistication of pretesting methods
have expanded. The new methods have led to
greater awareness that survey questions are neither
asked nor understood in a uniform way, and
revisions based on pretest results appear to lead to
improvements. However, questions remain about
the validity and reliability of the methods and also
the relationship between the problems they identify
and measurement errors in surveys. Because the
methods appear better able to identify problems
than solutions, an iterative approach involving
pretesting, revision, and further pretesting is
advisable. (A largely unmet need concerns
pretesting of translated questionnaires. For cross-
national surveys, and increasingly for intranational
ones, it is critical to establish that a questionnaire
works and produces comparable responses in
multiple languages.)
Expert Appraisal and Review
Review of a questionnaire by experts in
questionnaire design, cognitive psychology, and/or
the relevant subject matter is relatively cost-
effective and productive, in terms of problems
identified. Nonexpert coders may also conduct a
systematic review using the questionnaire appraisal
scheme devised by Lessler and Forsyth (see
Schwarz and Sudman) to identify and code
cognitive problems of comprehension, retrieval,
judgment, and response generation. Automated
approaches advanced by Graesser and colleagues
apply computational linguistics and artificial
intelligence to build computer programs that
identify interpretive problems with survey
questions (see Schwarz and Sudman).
Think-Aloud or Cognitive Interviews
This method was introduced to survey researchers
from cognitive psychology, where it was used by
Herbert Simon and colleagues to study the
cognitive processes involved in problem-solving.
The procedure as applied in surveys is to ask
laboratory subjects to verbalize their thoughts–to
think out loud–as they answer survey questions (or,
if the task involves filling out a self-administered
questionnaire, to think aloud as they work their
way through the questionnaire). Targeted probes
also may be administered (e.g., “What period of
time are you thinking of here?”). Tapes, transcripts,
or summaries of respondents’ verbal reports are
reviewed to reveal both general strategies for
answering survey questions and difficulties with
particular questions. Cognitive interviews may be
concurrent or retrospective, depending on whether
respondents are asked to report their thoughts and
respond to probes while they answer a question, or
after an interview is concluded. Practitioners vary
considerably in how they conduct, summarize, and
analyze cognitive interviews, and the effects of
such procedural differences are being explored.
The verbal reports elicited in cognitive interviews
are veridical if they represent information available
in working memory at the time a report is
verbalized, if the respondent is not asked to explain
and interpret his own thought processes, and if the
social interaction between cognitive interviewer
and subject does not alter a respondent’s thought
process, according to Willis (see Presser et al.).
Cognitive interviewing has proved to be a highly
useful tool for identifying problems with questions,
although research is needed to assess the extent to
which problems it identifies translate into
difficulties in the field and errors in data.
Behavior Coding
This method was originally introduced by Cannell
and colleagues to evaluate interviewer
performance, but has come to be used more
frequently to pretest questionnaires. Interviews
are monitored (and usually tape recorded), and
interviewer behaviors (e.g., “Reads question
exactly as worded” and “Reads with major change
in question wording, or did not complete question
reading”) and respondent behaviors (e.g.,
“Requests clarification” and “Provides inadequate
answer”) are coded and tabulated for each
question. Questions with a rate of problem
behaviors above a threshold are regarded as
needing revision. Behavior coding is more
systematic and reveals many problems missed in
conventional pretests. The method does not
necessarily reveal the source of a problem, which
often requires additional information to diagnose.
Nor does it reveal problems that are not manifested
in behavior. If respondents and interviewers are
both unaware that respondents misinterpret a
question, it is unlikely to be identified by behavior
coding. Importantly, behavior coding is the only
method that permits systematic evaluation of the
assumption that interviewers administer questions
exactly as worded.
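The tabulation step of behavior coding can be sketched as follows. The behavior codes, the coded interview data, and the 15% problem threshold are illustrative assumptions; in practice both the code frame and the threshold vary across studies:

```python
# Sketch of behavior-coding tabulation: count coded behaviors per
# question and flag questions whose problem-behavior rate exceeds a
# threshold. Codes, data, and the 0.15 threshold are assumptions.
from collections import Counter

PROBLEM_CODES = {"major_wording_change", "clarification_request",
                 "inadequate_answer"}

# (question_id, behavior_code) pairs from monitored interviews
coded = [
    ("Q1", "exact_reading"), ("Q1", "clarification_request"),
    ("Q1", "exact_reading"), ("Q1", "exact_reading"),
    ("Q2", "inadequate_answer"), ("Q2", "major_wording_change"),
    ("Q2", "exact_reading"), ("Q2", "clarification_request"),
]

def flag_questions(pairs, threshold=0.15):
    """Return problem rates for questions exceeding the threshold."""
    totals, problems = Counter(), Counter()
    for qid, code in pairs:
        totals[qid] += 1
        if code in PROBLEM_CODES:
            problems[qid] += 1
    return {q: problems[q] / totals[q] for q in totals
            if problems[q] / totals[q] > threshold}

print(flag_questions(coded))  # {'Q1': 0.25, 'Q2': 0.75}
```

As the text notes, a flagged question tells you only that something is going wrong, not why; diagnosis usually requires listening to the recordings or pairing this method with cognitive interviews.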
Respondent Debriefing or Special Probes
Respondents may be asked directly how they
answered or interpreted specific questions or
reacted to other aspects of the interview. Survey
participants in effect are asked to assume the role
of informant, rather than respondent. Probes to
test interpretations of terminology or question
intent are the most common form of debriefing
question, and their usefulness for detecting
misunderstandings is well documented by Belson,
Cannell, and others. For example, the following
probes were asked following the previously
discussed question about doctor visits: “We’re
interested in who people include as doctors or
assistants. When you think of a doctor or assistant,
would you include a dentist or not? Would you
include a laboratory or X-ray technician or not? ...
Did you see any of those kinds of people during the
last year?” Specific probes targeted to suspected
misunderstandings have proved more fruitful than
general probes or questions about respondents’
confidence in their answers. (Respondents tend to
be overconfident, and there is no consistent
evidence of a correlation between confidence and
accuracy.) Debriefing questions or special probes
have also proved useful for assessing question
sensitivity (“Were there any questions in this
interview that you felt uncomfortable
answering?”), other subjective reactions (“Did you
feel bored or impatient?”), question comprehension
(“Could you tell me in your own words what that
question means to you?”), and unreported or
misreported information (“Was there an incident
you thought of that you didn’t mention during the
interview? I don’t need details.”). Their particular
strength is to reveal misunderstandings and
misinterpretations of which both respondents and
interviewers are unaware.
Vignettes
Vignettes are brief scenarios that describe
hypothetical characters or situations. Because they
portray hypothetical situations, they offer a less
threatening way to explore sensitive subjects.
Instead of asking respondents to report directly
how they understand a word or complex concept
(“What does the term crime mean to you?”) which
has not proved to be generally productive, vignettes
pose situations which respondents are asked to
judge. For instance:
“I’ll describe several incidents that could have
happened. We would like to know for each,
whether you think it is the kind of crime we are
interested in, in this survey.... Jean and her
husband got into an argument. He slapped her hard
across the face and chipped her tooth. Do you
think we would want Jean to mention this incident
to us when we asked her about crimes that
happened to her?”
The results reveal how respondents interpret
the scope of survey concepts (such as crime) as
well as the factors influencing their judgments.
Research suggests that vignettes provide robust
measures of context and question wording effects
on respondents’ interpretations.
Split-Sample Experiments
Ultimately, the only way to evaluate the effects of
variations in question wording, context, etc. on
responses is to conduct an experiment in which
samples are randomly assigned to receive the
different versions. It is essential to ensure that all
versions are administered under comparable
conditions, and that data are coded and processed
in the same way, so that differences between
treatments can be unambiguously attributed to the
effects of questionnaire variations. Comparison of
univariate response distributions shows gross
effects, whereas analysis of subgroups reveals
conditional or interaction effects. Field
experiments can be designed factorially to evaluate
the effects of a large number of questionnaire
variables on responses, either for research purposes
or to select those that produce the best
measurements. When a survey is part of a time
series and data must be comparable from one
survey to the next, this technique can be used to
calibrate a new questionnaire to the old.
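The random assignment at the heart of a split-sample design can be sketched briefly. Version names, sample size, and responses below are invented for illustration; the essential point is that assignment is random, so distributional differences between versions can be attributed to the questionnaire variation:

```python
# Sketch of a split-ballot design: each sampled case is randomly
# assigned one of two questionnaire versions; after fieldwork, the
# univariate response distributions are compared by version.
# Version names, sample size, and responses are assumptions.
import random
from collections import Counter

random.seed(0)  # reproducible assignment

sample_ids = list(range(200))
assignment = {sid: random.choice(["form_A", "form_B"])
              for sid in sample_ids}

def version_distribution(responses, assignment, version):
    """Proportion choosing each answer among cases given `version`."""
    answers = [ans for sid, ans in responses.items()
               if assignment[sid] == version]
    counts = Counter(answers)
    return {k: v / len(answers) for k, v in counts.items()}

# Invented fieldwork results for illustration.
responses = {sid: random.choice(["favor", "oppose"])
             for sid in sample_ids}
print(version_distribution(responses, assignment, "form_A"))
```

Comparing these univariate distributions shows gross effects; repeating the comparison within subgroups reveals the conditional or interaction effects mentioned above.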
Conclusion
Survey questionnaire designers aim to develop
standardized questions and response options that
are understood as intended by respondents and that
produce comparable and meaningful responses. In
the past, the extent to which these goals were met
in practice was rarely assessed. In recent decades,
better tools for providing feedback on how well
survey questions perform have been introduced or
refined, including expert appraisal, cognitive
interviewing, behavior coding, respondent
debriefing, vignettes, and split-sample
experiments. Another advance is new theoretical
perspectives that help make sense of the effects of
question wording and context. One perspective
examines the cognitive tasks in which a respondent
must engage to answer a survey question. Another
examines the pragmatics of communication in a
survey interview. Both have shed light on the
response process, although difficult problems
remain unsolved. In addition, both perspectives
suggest limits on the ability to fully achieve
standardization in surveys. New theory and
pretesting tools provide a scientific basis for
decisions about construction of survey
questionnaires.
Further Reading
Belson, W. A. (1981). The Design and
Understanding of Survey Questions. London:
Gower.
Biderman, A. D., Cantor, D., Lynch, J. P., and
Martin, E. (1986). Final Report of the
National Crime Survey Redesign Program.
Washington DC: Bureau of Social Science
Research.
Fowler, F. J. (1995). Improving Survey
Questions: Design and Evaluation. Thousand
Oaks CA: Sage Publications.
Levinson, S. C. (1983). Pragmatics. Cambridge:
Cambridge University Press.
Presser, S., Rothgeb, J., Couper, M., Lessler, J.,
Martin, E., Martin, J., and Singer, E. (2004).
Methods for Testing and Evaluating Survey
Questionnaires. New York: Wiley.
Schaeffer, N. C. and Presser, S. (2003). “The
science of asking questions.” Annual Review
of Sociology 29:65-88.
Schuman, H. and Presser, S. (1981). Questions
and Answers in Attitude Surveys: Experiments
on Question Form, Wording, and Context.
New York: Academic Press.
Schwarz, N. and Sudman, S. (eds.) (1996).
Answering Questions: Methodology for
Determining Cognitive and Communicative
Processes in Survey Research. San Francisco:
Jossey Bass.
Sudman, S., Bradburn, N. M., and Schwarz, N.
(1996). Thinking About Answers: The
Application of Cognitive Processes to Survey
Methodology. San Francisco: Jossey-Bass.
Tourangeau, R., Rips, L. J. and Rasinski, K.
(2000). The Psychology of Survey Response.
Cambridge: Cambridge University Press.
ARTICLE
Teacher expectations of student reading in middle and high schools: A Chinese perspective
Liqing Tao, College of Staten Island, CUNY, USA
Haiwang Yuan, Western Kentucky University, USA
Li Zuo, United Nations, New York, USA
Gaoyin Qian, Lehman College, CUNY, USA
Bruce Murray, Auburn University, USA
Journal of Research in International Education © 2006 International Baccalaureate Organization (www.ibo.org) and SAGE Publications (www.sagepublications.com)
Vol 5(3) 269–299 ISSN 1475-2409 DOI: 10.1177/1475240906069449
This article investigates China’s middle school and secondary school teacher expectations of student book use as an aspect of learning environments. A questionnaire was used to probe the following teacher expectations: physical accessibility of books, homework, mastery of texts and types of extra-curricular reading materials. Results showed China’s teachers believed in mastering exemplary texts and the moral value of reading materials. Homework was viewed as a means to enhance student learning. The findings of the article can help educators understand better the learning environments of Chinese students and can offer a critical comparison of learning environments across cultures.
KEYWORDS: Chinese teachers, middle schools, secondary schools, student book use, teacher expectations
Introduction
Teachers influence student learning in school at two levels. First, teachers explicitly teach curricula. Their teaching directly influences how well students learn planned objectives. Second, teachers are responsible for creating classroom learning environments. Teachers affect students’ learning by holding expectations about students (Rosenthal and Jacobson, 1968), by selecting and using textbooks and by making books available to students (Worthy et al., 1999). Examining teacher expectations of students can highlight the context that structures and shapes student learning, thus helping us provide optimal learning environments for student success.

This article examines Chinese middle school and high school teachers’ expectations of students’ use of books. Understanding these expectations will benefit teachers and educators responsible for an increasing number of Chinese students worldwide. Teachers in host countries need to understand not only who their students are, but also the learning environments that have shaped their Chinese cultural perspectives, allowing teachers to adapt curricula to capitalize on their students’ prior learning experiences and to maximize their success. An understanding of Chinese teachers’ expectations of students’ use of books should offer a glimpse into the learning contexts in which Chinese students have been nurtured. In a broad sense, we examine the educational practices and ideas of a culture with a long history of education. Such a comparison promotes serious reflections for educational rethinking and reform.

The article is organized as follows. First, we introduce some background as to why this issue of teacher perspective on book use is important. The introduction will be situated in a literature review examining both learning environments in general and China’s educational context in particular. Second, we present the research methodology used to explore teacher expectations. Third, we present and discuss our results. In light of the study’s limitations, we conclude with a brief summary and discuss the implications for an international education audience.
Background
Learning environments and students’ achievement
Educational researchers around the world have given extensive attention to the effect of learning environments on student learning (e.g. Cavanagh and Waugh, 2004; Eisner, 1985; Fraser, 1994; Roelofs et al., 2003; Webster and Fisher, 2003; Wubbels et al., 1997). Researchers have closely examined many factors, ranging from the different effects of classroom- and school-level climates (Fraser, 1994; Freiberg, 1998), to the interpersonal skills of teachers (Wubbels and Levy, 1993, cited in Khine and Fisher, 2003), to the perceptions of participants interacting in learning environments (Fraser and O’Brien, 1985). Vygotsky’s social cognitive approach, social constructivism and the earlier seminal work of field theory (Lewin, 1936, cited in Fraser, 1994) have provided theoretical frameworks for researchers exploring the importance of learning environments in knowledge development and acquisition.
Among the various influential factors in students’ learning environments, teacher expectations have been found to be positively associated with students’ learning outcomes and attitudes toward learning (Hernandez, 2001; Rosenthal and Jacobson, 1968). In examining learning as a result of complex interactions of factors in classrooms, American researchers Rosenthal and Jacobson (1968) found that teacher expectations of students measurably affect learning outcomes: high and low teacher expectations are associated with correspondingly higher or lower achievement levels. Hernandez (2001) likewise highlights the facilitative function of teacher expectations for Hispanic minority students. In the UK, Mujis and Reynolds (2002) found that teachers’ behaviors and beliefs have both direct and indirect influence on students’ mathematics achievement. In Korean education, Lee (1996) found evidence that teacher behaviors are directly affected by their instructional beliefs. A recent Australian study by Cavanagh and Waugh (2004) further confirmed the positive correlation between teacher expectations and students’ formal learning outcomes, pointing toward the importance of building school and classroom cultures that are optimally congenial to students’ learning growth. Webster and Fisher (2003) found correlations between Australian students’ achievement in mathematics and science and their school culture and environments. New Zealand researchers Anderson et al. (2004) reported that students’ classroom participation, task engagement and task completion are significantly related to classroom learning environments. As is evident, environment–achievement correlations from educational research around the world confirm the importance of learning environments for student learning.
Tao et al.: Teacher expectations of student reading
271
Important as learning environments are for student learning, they are not part of the explicit curriculum in school. Rather, the creation of a learning environment in the classroom is part of the implicit (or hidden) curriculum, which is ordinarily the work of classroom teachers expressing their own educational theories and epistemological beliefs (Marra, 2005). Studies taking a critical stance on curricular and school reform have examined and highlighted the influence of the learning environment as an implicit curriculum (Good, 1987; King, 1986; Wren, 1999). Implicit curricula permeate school culture; they include such factors as social interactions, student perceptions, teacher expectations and the availability of support resources. To critical theorists, the implicit curriculum is invisible yet present, and can play an obstructive role in school reform if not understood and confronted. When examined together with the literature on learning environments, the implicit-curriculum perspective unequivocally highlights the need for educators and educational researchers to look into the broad context of student learning to understand its impact on educational achievement. In the case of China, researchers have yet to explore learning environments from an implicit perspective.
China’s learning environments in perspective: teachers and books
In the long history of Chinese education, the teacher has played a crucial role both as a knowledge transmitter and as a model for students. Traditionally, teachers in China always held a position of authority to elucidate and pass on the knowledge and principles of the classics to their students (Gardner, 1990). Han Yu (768–824 AD), one of the renowned Confucian scholars of the Tang Dynasty, once succinctly defined the role of a teacher as being that of ‘imparting the Way, delivering knowledge, and clarifying confusions’ (Shanghai Dictionary Press, 2002: 128). Further, a teacher was expected to be a moral person serving as a model for students in pursuit of knowledge. Such cultural attribution elevated teachers to a prominent position in traditional Chinese education, and vested in them an absolute authority over students’ school learning. Explicit curricula were thus passed on from teachers to students.
Explicit curricula in the past were dominated by Confucian ideology and the classics (Lee, 2000), which, to oversimplify, emphasized morality, virtue, personality cultivation and historical scholarship over the natural sciences and mathematics (Feng, 1990). Explicit curricula were further consolidated through the powerful system of high-stakes civil service examinations, which tied school learning to social success. Standard examination practices, such as requiring accurate reproduction of classic works (Tie Jin) and imposing a set formula for essay writing (Ba Gu Wen) in the civil service examinations, heightened the role of the classics in traditional curricula. Of course, modern Chinese education has consciously departed from the Confucian focus (You, 1998). Comprehensive curricula in modern grade schools include math, sciences, Chinese, humanities, etc. – comparable to what is typically offered internationally today.
However, the changing content of the explicit curriculum has not threatened teachers’ position of authority as knowledge transmitters and as models of learning, which remains a centerpiece of modern Chinese educational practice (e.g. Ingulsrud and Allen, 1999). Since teachers have such high status and authority over student learning in China, their voiced and unvoiced expectations of student reading affect student learning in school. Their expectations embody their beliefs and values as to how textbooks should be used and what types of books are useful for students’ extracurricular reading, thus making teacher expectations about the use of books and textbooks a powerful part of the implicit curriculum in China.
Books in China have always had a unique role in education. Traditionally, the Confucian classics were the curriculum. Classical books were the central vehicle of traditional education, and they were viewed as the essential means of preserving and transmitting Confucian moral and philosophical beliefs (Lee, 1995). Indeed, the civil service examinations that lasted for 1300 years only enhanced the central role of classic books: generations of aspiring scholars were educated in the classics and classical commentaries. In Chinese culture, studying the classical books earned an educated person a title of esteem: du-shu-ren, or literally, a ‘person who reads books’. This honorific signifies a fundamental understanding of traditional Chinese education as hinging upon the importance of the classical books.1
Modern Chinese education has by no means relinquished this focus on books. Some recent reports have documented the centrality of textbooks in China’s classroom learning (Wu et al., 1999) as well as the importance of exemplary texts in Chinese proficiency standards (Shanghai Elementary and Middle School Instructional Material Reform Research Office, 1999). It is evident that books continue to enjoy a central place in Chinese education.
When a reverence for authoritative books is combined with the traditionally revered authority of teachers, we would expect students’ learning in school to be strongly influenced both by the content of textbooks, that is, the explicit curriculum, and by the expectations of teachers as to how books should be used and learned, that is, the implicit curriculum.
Although there are observational reports about students and their learning environments in China’s classrooms (Ingulsrud and Allen, 1999), few studies have explored Chinese middle and high school teachers’ expectations of student book use, a component of the implicit curriculum.2 We believe such studies could help us better understand the classroom learning environment, the reigning educational philosophy, and the behavior of students, and contribute to a literature that broadens the perspectives of educators across cultures and countries.
This article explores Chinese teacher expectations from the teachers’ own perspectives. In particular, we asked the following research questions:

(1) What are Chinese middle school and high school teachers’ expectations of student book use as a learning tool? Specifically, what are their expectations about homework and text mastery?
(2) What do teachers expect their students to read outside of their classes? What do they hope students will accomplish with outside reading?
Methods
A 49-item questionnaire, in Chinese, was designed to explore Chinese teacher expectations of student book use (see the Appendix). We relied on previous studies of book choices and of China’s traditional uses and beliefs about books (Tao and Townsend, 1994; Tao and Zuo, 1997) as the basis for item construction. The relevant question items focused on (1) expectations of students’ physical access to textbooks; (2) homework expectations; (3) expectations of mastery of book knowledge; and (4) expectations of types of extracurricular reading materials.
We heeded expert advice on survey questionnaire construction (Baumann and Bason, 2004; Rea and Parker, 1997) by taking the following steps. First, items eliciting answers on a similar underlying aspect were interspersed across the questionnaire. For example, items on homework were scattered throughout the questionnaire. Second, we employed reverse-worded statements (both negative and positive) to break a possible response set. Third, we used a variety of question types, including Likert-scale items, multiple-choice questions, and open-ended questions. The 35 Likert items were scaled 1 through 5, with 1 being ‘strongly agree’, 3 being ‘neutral’ and 5 being ‘strongly disagree’. Eighteen of these questions were included in the analysis; the other 17 were not used. The five multiple-choice questions had an unequal number of choices, ranging from two to five items. The three open-ended questions required brief answers to follow up responses to multiple-choice questions or allowed teachers to supply their own choices. We also asked six questions at the beginning of the survey to collect demographic information.
The first two authors examined the draft version of the questionnaire independently to remove ambiguous or confusing words and phrases and to finalize the choice of items. Because dialect differences in Chinese expressions could result in misinterpretation by readers from different dialectal zones, we paid special attention to possible dialect ambiguity. In addition, because some items were first written in English and then translated into Chinese, we took pains to ensure that translation or rephrasing in Chinese retained the original meaning in English while still sounding idiomatic in Chinese. For example, for the item ‘I think as long as students are reading, they’ll get the benefit’, we used a more idiomatic Chinese expression than the literal English translation. We reached consensus on these items through discussion. The third author, a professional translator of English and Chinese at the United Nations, served as an additional check by answering each item during a 30-minute session. Her suggestions were incorporated into the final version, which was produced during several sessions of item-by-item discussion among the authors. The whole process of constructing the questionnaire took about two months. The Chinese version of the questionnaire is available upon request.
Cautions about the limitations of this study are warranted. First, the Likert-scale items include a neutral response. In the original Chinese, the wording ‘liangke’, meaning ‘both can do’, is a non-committal response. Some psychometricians argue for its inclusion to avoid a forced division along the scale of response, but we suspect that including a non-committal position may have increased teachers’ tendency to opt out of more directional responses, which may have skewed the data toward non-committal positions. Second, use of an unpiloted survey runs the risk of misinterpretation by respondents because of residual ambiguity in wording, thus affecting the information we wanted to collect. However, given the multiple screenings during item development, we believe this risk to be small. Third, because of logistical difficulties, the sample was not as large as we had hoped and was not randomized, which limits its power for statistical analysis. We list other limitations in the conclusion.
Ninety-three teachers from five public middle schools and high schools within a large school district in the Tianjin municipality participated in the study. Tianjin is one of the largest metropolitan areas in China. The teachers were recruited through mail and follow-up telephone calls as voluntary participants. The content areas they taught included Chinese, foreign languages, political science, mathematics, computer science, accounting, economics, history and physical education. They averaged 9.6 years of teaching experience. Since most content areas in China are taught through textbooks and other source materials such as supplementary readings, we deemed it appropriate to contact all secondary content-area teachers in the final pool of potential participants.
We sent 200 questionnaires to teachers in May, and 93 were returned. The return rate of 46.5 percent is deemed satisfactory based on average return rates for surveys (Weisberg et al., 1996).
We analyzed the data both qualitatively and quantitatively. Qualitative data analysis was conducted in two parts. We first scrutinized the supplied responses to the open-ended items. Since the open-ended questions were answered only briefly and usually in phrases, we were able to copy and chart the answers onto chart paper for a holistic inspection. We analyzed responses to closed items (Likert scale and multiple choice) with frequency counts, categorizing the frequency of each response for a particular item. A factor analysis was run on SPSS (Version 10) to verify the existence of distinctive factors. Results of the factor analysis were not significant in identifying distinctive factors and will not be reported here.
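The closed-item tabulation described above (a frequency count per response category, with percentages based on valid responses only, as in the tables that follow) can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ SPSS procedure; the `tabulate` function and the response list are hypothetical.

```python
from collections import Counter

# The 1-5 Likert labels used throughout the questionnaire
LIKERT = {1: 'strongly agree', 2: 'agree', 3: 'neutral',
          4: 'disagree', 5: 'strongly disagree'}

def tabulate(responses):
    """Frequency count and percentage for one Likert item.

    `responses` may contain None where a teacher skipped the item;
    percentages are computed over valid (non-missing) responses only.
    """
    valid = [r for r in responses if r is not None]
    n = len(valid)
    counts = Counter(valid)
    table = {LIKERT[k]: (counts.get(k, 0),
                         round(100 * counts.get(k, 0) / n, 1))
             for k in LIKERT}
    return table, n

# Hypothetical answers to one item: six teachers, one skipped it
table, n_valid = tabulate([1, 2, 2, 3, None, 4])
print(n_valid)         # 5
print(table['agree'])  # (2, 40.0)
```

Because the valid n differs from item to item, each item’s percentages are comparable within that item only, which is why the tables report a separate number of valid responses per statement.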
Results and discussion
We present and discuss the survey results in the order of our research questions. Results are summarized in tables where possible. Since not all participants responded to all items, the number of respondents varies across items. The percentage for each individual item is based on the valid responses to that item only. Our ensuing discussions will be situated, where appropriate, in the historical and cultural context of educational traditions in China.
Research question 1: What are Chinese middle school and high school teachers’ expectations of student book use as a learning tool? Specifically, what are their expectations about homework and text mastery?
We used the following constructs to categorize teacher expectations of student book use: expectations of students’ physical access to textbooks; homework expectations; and expectations of mastery of book knowledge. These constructs categorize teacher expectations of book use from the concrete to the elusive: from physical possession of books to the role books play in school learning.
Expectations of students’ physical access to textbooks

Some background knowledge on Chinese customs with textbooks may be helpful. Chinese students are required to purchase textbooks and other supplementary materials. The textbooks are paperbacks with an average length of 200 pages, supplemented by workbooks commonly required by teachers in each content area. Students have no lockers in which to keep their books in school. They have only an individual desk with storage space designated for each student during the school day, which may have to be shared with another student. Accordingly, Chinese students from elementary school onward are expected to carry all their books to and from school in their backpacks.
Teachers’ estimates of the number of books students carried in their backpacks are reported in Table 1. Estimates tended to skew toward the higher end, with 32 of 89 teachers estimating that students carry nine or more books. Only five teachers thought students carry only one or two books. The responses to a related item, ‘I believe students should leave their textbooks at school’, are summarized in Table 2. Responses clustered around ‘disagree’ and ‘neutral’: six out of 90 teachers strongly disagree with the statement, 28 disagree and 38 remain neutral, while seven agree and 11 strongly agree. Though negative views tended to outweigh positive ones, more of the teachers with strong opinions on this question think students should leave their textbooks at school. Although most report students carrying many books home, and express approval of this homework burden, the contrary views might signal a different understanding of the use of textbooks at home. The discussion of homework in the following section will elaborate further on the relevance of carrying textbooks home.
Homework expectations

The following three items ask for teachers’ beliefs about textbook use, eliciting their homework expectations: ‘I believe the primary function of homework is to reinforce and digest the content taught in the classroom’; ‘I believe that homework helps students consolidate learning’; and ‘I believe homework should focus on broadening students’ knowledge’. As can be seen from Table 2, the responses to these three items are skewed towards ‘strongly agree’ and ‘agree’. Specifically, about 70 percent of the teachers either agreed or strongly agreed that homework helps to enhance their students’ learning. Only eight teachers disagreed. Neutral responses were given by 15 teachers for the first two statements, but 26 teachers were non-committal on the third statement. These data imply that most teachers believe homework is useful, but more of them see its use as reinforcing classroom learning than as broadening students’ knowledge (68 versus 52). The same trend is seen in the numbers of neutral responses to the three statements: fewer teachers are neutral about using homework to consolidate class learning than about using it to broaden knowledge (15 versus 26).

Table 1   Teacher estimates of how many books students carried between school and home every day

- 1 to 2 books: 5 (5.7%)
- 3 to 4 books: 16 (18%)
- 5 to 6 books: 19 (21.5%)
- 7 to 8 books: 17 (19.3%)
- 9 and above books: 32 (36%)
- Number of valid responses: 88
While physical ownership of textbooks by students is universal in China, teachers are split in their views on whether their students should take books home. This is surprising given that a majority of teachers acknowledge that their students carry more than five books home. Since Chinese students have no lockers in school, they usually carry their books in their book bags to and from school. This might indicate reduced teacher expectations of their students reading textbooks at home. However, this is at odds with the fact that more than 70 percent of the teachers believe homework helps students either enhance their classroom learning or broaden their knowledge. Less than 10 percent of the teachers do not think homework is important in student learning.

Table 2   Frequency count and number of responses to focused questions
(strongly agree / agree / neutral / disagree / strongly disagree, each as % (n); valid n per item)

- I believe students should leave their textbooks at school: 12.2 (11) / 7.8 (7) / 42.2 (38) / 31.1 (28) / 6.7 (6); n = 90
- I believe the primary function of homework is to reinforce and digest the content taught in the classroom: 29.7 (27) / 45.1 (41) / 16.5 (15) / 8.8 (8) / 0 (0); n = 91
- I believe that homework helps students consolidate learning: 21.3 (19) / 52.8 (47) / 16.9 (15) / 7.9 (7) / 1.1 (1); n = 89
- I believe homework should focus on broadening students’ knowledge: 23.3 (20) / 37.2 (32) / 30.2 (26) / 9.3 (8) / 0 (0); n = 86
To put the issue in perspective, homework is virtually universal in China, almost a certain extension of the school routine. Either teachers assign homework, or students work on supplementary workbooks themselves on teachers’ or parents’ advice. Most Chinese parents spend time supervising their children’s homework or assign their own (Chao, 1996; Su, 2000). Parents may even be upset when their children do not have schoolwork. Teachers commonly assign exercises in textbooks or supplementary materials (Wu et al., 1999), which would be included among the books carried by students in their book bags. Thus, there is a contradiction between teachers’ perception that students have a high need for homework and their view that students have little need to carry their books home.
One plausible explanation for this discrepancy is that different subject areas might require different amounts of homework to supplement classroom learning. In some areas adequate practice might be accomplished at school, while other areas might require more extensive practice at home. For example, physical education teachers likely require little or no homework from their students, while mathematics teachers regularly assign it. To check this explanation, we revisited the original survey and compared the responses of physical education and mathematics teachers to the item in question. Physical education teachers scored an average of 2.2 points (along the 1–5 Likert scale from strongly agree to strongly disagree) on the item stating that students should leave their books in school, while the math teachers scored an average of 3.3 along the scale. In other words, physical education teachers do not insist that their students take their work home to the same extent as their math colleagues do. Though the difference in averages does not reach statistical significance, owing to the small number of teachers in each content area, it suggests a comparative analysis of subject areas in future research.
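The subject-area check described above is a simple comparison of group means on the 1–5 scale. A minimal sketch (the response lists below are hypothetical, not the study’s raw data):

```python
def mean_likert(responses):
    """Average of 1-5 Likert responses for one item.

    On this scale (1 = strongly agree ... 5 = strongly disagree),
    a lower mean indicates stronger agreement with the statement.
    """
    return sum(responses) / len(responses)

# Hypothetical responses to 'students should leave their textbooks
# at school', grouped by the subject each teacher teaches
pe_teachers = [2, 2, 2, 3]    # physical education: lean toward agree
math_teachers = [3, 3, 4, 3]  # mathematics: lean toward neutral/disagree

print(mean_likert(pe_teachers))    # 2.25
print(mean_likert(math_teachers))  # 3.25
```

With only a handful of teachers per subject, such a difference in means is descriptive only; testing it for significance would require a larger sample, as the authors note.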
We noticed that teachers’ typical understanding of homework is quite traditional. Most teachers (75 percent) expect homework to supplement classroom instruction, while 60 percent expect homework to be used to broaden students’ knowledge. While this is not a great difference, it is interesting to note that these teachers tend to view homework more as a tool to extend classroom instruction than as an opportunity to expand learning. Such a narrow view of the function of homework would not recommend, for instance, extensive reading beyond the classroom that competes with time set aside for textbooks. This suggests that the central role of textbooks in traditional Chinese education continues today, at least in the minds of many of the study’s respondents.
Expectations of mastery of book knowledge

Six items were used to probe teacher perceptions of text mastery. They focused on the practice of repeated reading and recitation. The six items are:

(1) I require my students to get familiar with texts through repeated reading.
(2) I believe text recitation can enhance students’ reading proficiency.
(3) I believe that in order to master the content of a text, students should first recite it.
(4) I allocate class time for students to practice reciting texts.
(5) I believe recitation of exemplary texts can enhance students’ writing ability.
(6) I am against requiring students to recite texts.
Some variation was observed in teachers’ responses to these statements. As can be seen from Table 3, a majority of teachers (76 of 89) either ‘agree’ or ‘strongly agree’ that they require their students to get familiar with texts by repeated reading. For the second, third and fourth statements, the number of teachers in favor drops sharply. Statement 5, about recitation of exemplary texts enhancing writing, has a different distribution: half of the teachers agree or strongly agree, and very few disagree or strongly disagree. Statement 6, against text recitation, has almost an equal number of teachers on either side of the argument, a distribution not seen in the responses to the previous statements in this section. These results show that while teachers readily acknowledge the role of repeated reading in familiarizing students with the texts to be learned, many of them are not convinced of the higher-level effects that recitation or memorization might achieve in enhancing writing ability, reading proficiency or comprehension.
Even so, about half of the teachers think that recitation is important, and only about one-fifth disapprove of recitation as a learning method. This is consistent with observations reported in other studies (Cheng, 1993; Doolin and Ridley, 1968; Petri, 1984; Unger, 1977). Teachers in China’s middle and high schools require students to be familiar with texts through repeated reading and recitation. Teachers see recitation as a necessary means to master the content taught. The fact that nine more teachers support committing exemplary texts to memory than give blanket approval to memorizing any text (54 versus 45, or 60% versus 50%) might point to their understanding of the purpose of memorization. Their interest in memorizing exemplary texts is in line with the traditional belief that exemplary texts can serve as a vehicle to master and transfer knowledge (Confucius, 500 BC [1990]; Gardner, 1990). This emphasis on mastery through repeated reading and memorization reflects a time-honored practice in Chinese education: mastery of books recognized as master works.
Summarizing responses to research question 1, we found that most Chinese middle school and high school teachers expect students to use textbooks for homework and view homework as an important adjunct to students’ learning at school, though that expectation might be in decline. In addition, most Chinese teachers expect students to master exemplary texts through recitation or repeated reading.

Table 3   Frequency count and number of responses to focused questions
(strongly agree / agree / neutral / disagree / strongly disagree, each as % (n); valid n per item)

- I require my students to get familiar with texts through repeated reading: 23.6 (21) / 61.8 (55) / 11.2 (10) / 2.2 (2) / 1.1 (1); n = 89
- I believe recitation can enhance students’ reading proficiency: 16.9 (15) / 33.7 (30) / 30.3 (27) / 19.1 (17) / 0 (0); n = 89
- I believe that in order to master the content of a text, students should first recite it: 14.9 (13) / 31.0 (27) / 32.2 (28) / 20.7 (18) / 1.1 (1); n = 87
- I allocate class time for students to practice reciting texts: 12.5 (11) / 37.5 (33) / 28.4 (25) / 20.5 (18) / 1.1 (1); n = 88
- I believe recitation of exemplary texts can enhance students’ writing ability: 19.1 (17) / 41.6 (37) / 30.3 (27) / 5.6 (5) / 3.4 (3); n = 89
- I am against requiring students to recite texts: 9.1 (8) / 33.0 (29) / 17.0 (15) / 35.2 (31) / 5.7 (5); n = 88
Research question 2: What do teachers expect their students to read outside of their classes? What do they hope students will accomplish with outside reading?
We used the following constructs to categorize aspects of teachers’ responses related to this research question: understanding of students’ reading abilities; attitudes toward outside reading; and evaluation of texts for outside reading. These constructs help illuminate teachers’ understanding about whether students are capable of reading on their own, their beliefs about the value of extracurricular reading, and their evaluation of what students are reading. The items for each construct are listed below along with the frequency counts.

Table 4   Frequency count and number of responses to focused questions
(strongly agree / agree / neutral / disagree / strongly disagree, each as % (n); valid n per item)

Reading ability
- My students have difficulty reading their textbooks: 3.5 (3) / 17.6 (15) / 25.9 (22) / 48.2 (41) / 4.7 (4); n = 85
- I believe my students have very strong reading ability: 14.0 (12) / 26.7 (23) / 43.0 (37) / 16.3 (14) / 0 (0); n = 86

Teacher attitudes
- I think as long as students are reading, they’ll get the benefit: 43.3 (39) / 30.0 (27) / 10.0 (9) / 14.4 (13) / 2.2 (2); n = 90
- I encourage my students to read a lot of extracurricular materials: 28.4 (25) / 56.8 (50) / 14.8 (13) / 0 (0) / 0 (0); n = 88
- I often recommend books for my students to read after class: 38.5 (35) / 37.4 (34) / 19.8 (18) / 3.3 (3) / 1.1 (1); n = 91
- I often recommend difficult reading materials for my students to read after class: 14.0 (12) / 29.1 (25) / 38.4 (33) / 18.6 (16) / 0 (0); n = 86

(Table 4 continues below.)
Reading ability

As seen in the responses to the items ‘My students have difficulty reading their textbooks’ and ‘I believe my students have very strong reading ability’, about 40 percent of teachers see their students as very strong readers. More than half the teachers reject the notion that their students have trouble reading textbooks. Only one in five teachers sees their students as having difficulty reading textbooks, and only 16 percent express some doubt about their students’ reading ability (with no one strongly contesting their students’ ‘very strong reading ability’). Thus, the teachers in our sample appear to have moderate confidence in their students’ reading ability. They believe a majority of their students are performing at an acceptable level. This teacher confidence might support a willingness to recommend books to students for outside independent reading.
Teacher attitudes  We probed teacher attitudes toward extracurricular readings regarding two associated aspects: teachers’ general attitudes toward extensive reading; and their specific actions related to extracurricular readings. General attitudes toward broad reading are reflected in the statement, ‘I think as long as students are reading, they’ll get the benefit.’

Tao et al.: Teacher expectations of student reading 283

Table 4 continued

Item statements                                   Strongly   Agree      Neutral    Disagree   Strongly   Number of
                                                  agree                                       disagree   valid
                                                  % (n)      % (n)      % (n)      % (n)      % (n)      responses

Teachers’ evaluation of outside reading materials
I don’t think popular reading materials on
the market are appropriate for my students        44.9 (40)  34.8 (31)  10.1 (9)   6.7 (6)    3.4 (3)    89
I believe many popular reading materials
are a waste of time                               28.1 (25)  29.2 (26)  22.5 (20)  16.9 (15)  3.4 (3)    89
Teachers’ specific actions related to extracurricular readings are captured in the following three statements: ‘I encourage my students to read a lot of extracurricular materials’, ‘I often recommend extracurricular reading materials to my students’ and ‘I often recommend difficult reading materials for my students to read after class’. As can be seen from Table 4, responses were fairly consistent across these statements. A majority of teachers would encourage students to read extracurricular materials extensively, and they report that they often recommend such materials to their students. The only discrepancy occurs when it comes to recommending difficult materials for free-time reading. Fewer than half of the teachers agree or strongly agree with that statement.
Given the ubiquity of printed texts in modern society, it is unlikely that teachers would be averse to exposing their students to works beyond their textbooks, such as poems, novels, and autobiographies. In our survey, most teachers agreed that students benefit from independent reading, though they would be reluctant to recommend difficult reading materials that might challenge students beyond their ability and so dampen their interest. This endorsement of extensive reading indicates some change in the mentality that traditionally limited reading to exemplary texts.
Teachers’ evaluation of what their students are reading outside
This topic comprises two aspects: teachers’ perception of the value of popular reading materials on the market; and their perception of the types of extracurricular materials students choose.
Teachers’ perceptions of the value of popular reading materials were probed by two items: ‘I don’t think popular reading materials on the market are appropriate for my students’ and ‘I believe many popular reading materials are a waste of time’. About 79 percent of teachers think popular reading materials inappropriate for students (compared to 10 percent approval), and 57 percent think them a waste of time (versus 21 percent who see value in them). While these items might be distinguished in nuance, teachers’ responses evinced a similar attitudinal trend. Many teachers have concerns about popular reading materials they believe are inappropriate for their students, but fewer think that reading popular materials is entirely frivolous.
284 Journal of Research in International Education 5(3)

Two open-ended questions were used to follow up responses about extracurricular reading materials: ‘To me, extracurricular materials are as follows’, and ‘My students’ extracurricular materials include the following’. In our sample, 59 teachers answered both questions, and another 14 answered one of the two. Their answers typically give a descriptive label or titles of books and magazines (see Table 5). Because of the sporadic nature of the answers to these two questions, we report answers by category rather than as frequency counts. Table 6 lists teachers’ perception of appropriate extracurricular reading materials in juxtaposition with what they see their students reading. There are some clear discrepancies between the two. Fifteen teachers mentioned classics as appropriate extracurricular readings for their students, but only one teacher believes students are actually reading classics on their own. Since our data in this study capture teacher perceptions rather than the actual reading practices of students,
Table 5  Extracurricular reading expected by teachers versus what teachers thought their students actually read outside class

Descriptive examples

Teacher expected reading:
- Journals or books that are beneficial to students and can broaden their perspectives
- Works that are uplifting and inspiring
- Books that are for the healthy development of mind and body
- Stories about heroes
- Those that nurture personality
- (Auto)biographies of famous persons
- Classical novels
- Reference books for learning and study

What they thought their students read:
- Martial arts fictions
- Magazines and journals
- Science fictions
- Novels
- Books and magazines about movie stars, pop stars, and computer software
- Cartoons
- Qiong Yao’s fiction
- Myriads of books
- Different students read very different things

Title examples

Teacher expected reading:
- Reader’s Digest
- One Hundred Thousand Whys: A Science Encyclopedia
- Middle School Students
- How to Win Friends and Influence People by Dale Carnegie
- The Dream in the Red Mansion
- The Third Wave
- Current Affairs
- Contemporary Monthly Novel
- Chinese Self-studies

What they thought their students read:
- Teenage Boys and Girls
- Friends
- Soccer Weekly
- Soccer Fan
- Youth Digest
- Cartoons
- Girl Friends
- Tien Jing Youth Daily

Note: All of the above examples are translated direct quotes from teachers’ answers to the two open-ended questions.
we do not know whether the teacher estimates of reading ‘classics’ reflect the reality, or whether all the students are reading classics or just some of them. However, the contrast between the number of teachers who espouse classic reading and the number who perceive their students actually reading classics is huge.