The Fifteen Ethical Traps and Lessons
Learned on Avoiding Them
Student name:
Ethical Trap and Avoidance Mechanism One: Justification
The justification trap involves rationalizing bad decisions
and unethical behavior by claiming they are necessary or the
need of the hour. The trap makes people believe that their
unethical conduct serves a greater good. The world has seen
many instances of it: killing people who happen to belong to
a particular religion has been justified time and again as a
defense of one's own religion.
This trap can be countered through the reputation
perspective, which calls on a person to take responsibility
and do the right thing at all times, guided by the principles
of justice, integrity, and courage.
Ethical Trap and Avoidance Mechanism Two: Money
Money is commonly treated as the means to happiness and as
one of the goals of life. This trap drives people to acquire
as much money as they can, by whatever route, in pursuit of
the happiness they believe it brings. In a world where money
has replaced every other metric by which a person is judged,
the urge to earn it fast leads to crooked shortcuts to easy
profits, tax evasion, and other illegal activities.
This trap can be countered by weighing people on the scale
of their work and their attitude toward others, and by giving
money less weight in life.
Ethical Trap and Avoidance Mechanism Three: Conflicts of
Interest
The conflicts-of-interest trap arises when a person is
caught between two parties and can ensure that only one of
them benefits while the other loses. The conflict becomes an
ethical trap when the person resolves it according to what
benefits him the most, for instance by taking a bribe from
one party in order to rule the matter in its favor.
Mill's principles can keep conflicts of interest at bay:
always act for the best within the rules. Integrity and
honesty help the person make the right decision, and never
accepting favors while working under the rules ensures the
trap is avoided at all times.
Ethical Trap and Avoidance Mechanism Four: Faceless victims
This trap involves generalizing victims, so that the
unethical behavior directed at them diminishes in the mind of
the person who committed it. By refusing to picture the pain
of the people affected, the perpetrator finds it easy to
escape responsibility for the damage caused. Referring to
people who died in a war merely as numbers is part of this
trap.
The trap can be avoided by looking at all people the same
way and never stripping the human factor from any victim.
Responsibility should be taken, and measures put in place so
that the damage is compensated in some way rather than
shrugged off. True integrity and courage are required to
avoid this trap.
Ethical Trap and Avoidance Mechanism Five: Conformity
This trap is the aligning of attitudes, beliefs, and
behavior just to fit in better. If coworkers do not work
diligently or are not honest, a person faces a choice: do the
same, or be nagged all the time and left out of the group for
behaving differently.
Conformity can be avoided through self-satisfaction. A
person who knows he or she does not need a group to be happy
can work as they wish without changing their beliefs.
Strength and integrity ensure a person does what is required
at all times, with no need to become like someone else just
to fit in.
Ethical Trap and Avoidance Mechanism Six: Advantageous
Comparison
This trap involves comparing one's action with something
worse. The comparison gives the satisfaction that what the
person did was better than what he might have done, and so
the action feels validated.
The trap can be avoided by comparing the action with
something of equal or higher nature, or not at all. A person
who compares against a higher standard will see what he did
wrong, and from then on it won't be repeated.
Ethical Trap and Avoidance Mechanism Seven: Obedience to
Authority
This trap involves obeying someone who holds more power
than you. If a manager orders something that may be wrong, an
employee will often obey because disobedience could get him
fired. Obedience out of fear, without understanding the
nature of the work or the consequences of the action, is an
ethical trap.
The trap can be avoided by ensuring work is done for the
firm's welfare and not to please authority. Authority can be
asked to explain why a particular task has to be done. Blind
faith in authority is avoided by the strength to always do
right and to question what is wrong.
Ethical Trap and Avoidance Mechanism Eight: Alcohol
This trap involves drinking alcohol to wash away bad
feelings. Those feelings come from having done something
wrong, and intoxication only makes a person forget them
temporarily.
The trap can be avoided by knowing that alcohol is not a
permanent solution. It takes courage and integrity to accept
that something wrong has been done and to work on it so that
such things are not repeated.
Ethical Trap and Avoidance Mechanism Nine: Contempt for
Victim
This trap involves dehumanizing victims to make it easier
to harm them. When victims are seen as mere numbers and
employees as mere hired help, those in authority stop seeing
them as individual human beings.
The trap can be avoided by seeing people as human. This
requires inner strength and the ability to foresee the impact
of the harm we are about to inflict. Putting ourselves in the
victims' position gives a reality check that ensures they are
always seen as human in the future.
Ethical Trap and Avoidance Mechanism Ten: Competition
This trap involves competition between two or more people
or parties. Competition is the urge to get ahead of the
other, and it sometimes grows so fierce that one or both
parties break the rules and resort to unethical practices to
harm the other and pull ahead.
The trap can be avoided through mutual respect between the
parties; competitors can coexist only if they respect one
another. Competition is fair, and good, when everything is
done under the rules and regulations.
Ethical Trap and Avoidance Mechanism Eleven: We Won’t get
Caught
This trap is the self-deceiving feeling that nothing will
happen to us even when we are doing something wrong. Lack of
faith in justice, or a crooked system, often convinces people
that their unethical practices are hidden in the safest way
possible and will never be traced back to them.
The trap can be avoided by trusting that justice will
prevail sooner or later; even if justice fails to catch them,
their inner conscience will haunt them. Integrity and honesty
in work ensure that people always do the right thing and
never develop the feeling that they won't be caught, since
they would always know what they did.
Ethical Trap and Avoidance Mechanism Twelve: Anger
This trap involves covering up fear by showing hostility
toward others. Anger masks guilt, but it also keeps people at
a distance from whomever it is directed at. Anger is a very
powerful emotion that can make people aggressive.
The trap can be avoided through sympathy and love toward
every person. When the anger itself weakens, a person will no
longer fear admitting the guilt of what he did to the other
person.
Ethical Trap and Avoidance Mechanism Thirteen: Small Steps
This trap involves committing unethical acts in small
steps. Each step makes the person more tolerant of their
unethical nature, and the problem grows more severe as he
becomes accustomed to it and keeps raising the bar on what
counts as a small step.
The trap can be avoided at the initial stage itself. Every
small step should be seen as a sign of guilt, never accepted,
and dealt with firmly so that the gravity of the acts never
increases.
Ethical Trap and Avoidance Mechanism Fourteen: Tyranny of
Goals
This trap pushes people to move fast in order to achieve
their goals, even if it means cutting corners on some goals
to reach the prime one. It can involve shortcuts and
unethical approaches taken just to finish the work.
The trap can be avoided by making sure goals are completed
as they were originally intended. Nothing should be left
behind, and no goal should be modified just to claim it has
been reached; the quality and the intended form of the goal
should remain untouched.
Ethical Trap and Avoidance Mechanism Fifteen: Don’t make
Waves
This trap involves using authority to keep everyone quiet
on a matter in order to avoid suspicion or challenge. It
ensures that no one speaks up and that no steps are taken to
unearth a matter suspected of unethical behavior.
The trap can be avoided by ensuring that everyone gets a
say in the matter. Meetings should be regular, and everyone
should be allowed to speak their mind without fear of being
reprimanded later. Free thinking and the willingness to admit
any wrongdoing can ensure that this trap is avoided.
A Controlled Study of Clicker-Assisted Memory Enhancement
in College Classrooms
AMY M. SHAPIRO1* and LEAMARIE T. GORDON2
1Psychology Department, University of Massachusetts
Dartmouth, Dartmouth, MA, USA
2Psychology Department, Tufts University, Medford, MA, USA
Summary: Personal response systems, commonly called
‘clickers’, are widely used in secondary and post-secondary
classrooms.
Although many studies show they enhance learning,
experimental findings are mixed, and methodological issues
limit their
conclusions. Moreover, prior work has not determined whether
clickers affect cognitive change or simply alert students to
information likely to be on tests. The present investigation used
a highly controlled methodology that removed subject and item
differences from the data to explore the effect of clicker
questions on memory for targeted facts in a live classroom and
to gain
a window on the cognitive processes affecting the outcome. We
found that in-class clicker questions given in a university
psychology class augmented performance on delayed exam
questions by 10–13%. Experimental results and a class survey
indicate
that it is unlikely that the observed effects can be attributed
solely to attention grabbing. Rather, the data suggest the
technology
invokes the testing effect. Copyright © 2012 John Wiley &
Sons, Ltd.
Personal response systems allow instructors to present
multiple-choice questions in any classroom equipped with
a digital projection system. Students are required to purchase
a remote (commonly called a ‘clicker’) that allows them to
‘click in’ responses, which are recorded by a receiver. With
the instructor’s remote, a few button clicks allow instant
projection of class responses to provide immediate feedback
to students and also upload students’ responses to a grade
book. Clickers have been used for a variety of educational
purposes including teaching case studies (Brickman, 2006;
Herried, 2006), replicating published studies in class (Cleary,
2008), and electronic testing (Epstein et al., 2002). On the
basis of published reports, however, the most common use
appears to be during lectures for assessing students’ compre-
hension of class material in real time and improving participa-
tion and attendance (Beekes, 2006; Poirier & Feldman, 2007;
Shih, Rogers, Hart, Phillis, & Lavoie, 2008). Although studies
of clicker effectiveness have yielded mixed results (discussed
later), the bulk of evidence indicates that the technology is
effective for enhancing learning. Most prior studies, however,
have only compared clicker classrooms with control classrooms
not using clickers. Further, to our knowledge, no published
work to date has directly explored the cognition underlying
clicker effects.
Here, we take a different approach and examine the effect
of clicker questions on memory for specific bits of factual
knowledge in clicker-assisted classrooms. The present
experiment was designed to answer two questions. Specifi-
cally, does clicker use promote learning in the classroom?
If so, do the observed improvements reflect true cognitive
change or are the enhancements simply a reflection of greater
emphasis placed on clicker-targeted information? To explain
the motivation behind this work, the following section will
briefly review the literature on clicker-assisted learning and
methodological concerns that may limit any conclusions that
can be made. A discussion of cognitive mechanisms that
may explain clicker effects will provide a foundation for
the specific research questions addressed by the study.
CLICKER-ASSISTED LEARNING OUTCOMES
Many studies employing indirect measures of learning have
reported positive effects of clickers, such as class participation
(Draper & Brown, 2004; Stowell & Nelson, 2007; Trees &
Jackson, 2007) and perceptions of learning (Hatch, Jensen, &
Moore, 2005), across various disciplines. Yet others report no
effect of clickers on indirect measures such as attendance,
engagement, or attentiveness (e.g. Morling, McAuliffe, Cohen,
& DiLorenzo, 2008). Such varied results exemplify the array of
findings within clicker literature.
More relevant to the present study, investigations employing
direct learning measures have also yielded somewhat mixed
results. Stowell and Nelson (2007) gave laboratory subjects a
simulated introductory psychology lecture and compared test
performance between groups asked to either use clickers or
do other sorts of participative activities during the lecture. They
found no differences between groups on learning outcome
measures. Kennedy and Cutts (2005), however, observed some
clicker effects but found that the strength of the relationship
between clicker use and learning outcome measures hinged
on how successful students were in answering the clicker
questions. Despite such discouraging reports, the majority of
published studies exploring direct effects of clickers on learn-
ing have yielded positive results. Ribbens (2007) found that
introductory biology students performed 8% better on tests
than his class 2 years prior, before adopting clickers, and
Morling et al. (2008) reported higher mean test scores on two
of four tests in their clicker classes than in their no-clicker
classes. Among studies reporting positive learning effects of
clickers, however, there is no consensus on the nature of the
effect. One area where findings differ is in studies that have
examined the effect of clicker questions on the specific exam
questions they target.
In one such study, Shapiro (2009) integrated clicker
question performance as a graded part of an introductory
psychology course. She targeted specific exam questions with
in-class clicker questions and compared performance on
targeted test questions with that of a control class that did not
use clickers. The lectures, course content, and test questions
were identical between classes. Performance on clicker-
targeted test questions was 20% higher in the experimental
class. Of course, class differences could account for some of
the effect. Evidence against group differences was provided
by a set of questions targeted as controls, for which neither
class saw clicker questions. Performance on those items
differed by just under 3% between classes. Although error
due to differences between items chosen for control and test
conditions may have been a factor, the data do point to an
effect of clickers on targeted exam question performance, but
not on untargeted question performance.

[*Correspondence to: Amy M. Shapiro, Psychology Department,
University of Massachusetts Dartmouth, 285 Old Westport Road,
Dartmouth, MA 02747-2300, USA. E-mail: [email protected]]
[Applied Cognitive Psychology 26: 635–643 (2012). Published
online 19 June 2012 in Wiley Online Library
(wileyonlinelibrary.com), DOI: 10.1002/acp.2843]
Using a similar methodology, Mayer et al. (2009) also
evaluated clicker-assisted learning. In addition to clicker
and control classes, they used a third no-clicker class, which
was given questions on paper to answer at the end of each
class rather than using clickers. Like Shapiro (2009), they
targeted specific exam questions with in-class questions in
both clicker and no-clicker classes. In the primary analysis,
they evaluated overall exam performance, including exam
questions that were directly targeted by clicker questions
(similar items) and those that were not (dissimilar items).
When the total exam score was used as the dependent
measure, students using clickers performed better compared
with those not using clickers. Mayer et al. also conducted a
secondary analysis, however, comparing student perfor-
mance on ‘similar’ versus ‘dissimilar’ exam questions (see
their Table 2). They reported no significant performance
differences on ‘similar’ test questions between the clicker and
no-clicker classes and the control classes. The clicker class,
however, performed significantly better on ‘dissimilar’ items
than the other two classes. It appears, then, that the overall
effect of clickers found in the primary analysis stemmed from
the dissimilar items. Although Mayer et al. and Shapiro both
found that clicker use improved test performance, their
findings do not cohere. Mayer et al. found a positive effect of
clicker questions on untargeted (dissimilar) test items but not
targeted (similar) items, whereas Shapiro found performance
enhancement only on targeted (similar) questions.
The literature on clicker-assisted learning is very inconsis-
tent in its findings. What may be at the root of differential
results among so many studies? A number of important
methodological issues within studies reporting clicker effects
may be the answer. One source may be class or instructor
differences within studies that have compared clicker-adopting
classes with non-adopting classes, as these studies are
vulnerable to the error introduced by individual and group
differences. Item differences between clicker-targeted and
control test items also may be problematic, as lack of counter-
balancing between conditions also introduces error. Moreover,
lack of standardization or control regarding the strength of the
relationships between clicker questions and test questions
creates a potential for variability in strength of the treatment
between items in a study, thus threatening internal validity.
Finally, student motivation varies between studies, as students
may be enticed to participate through varied means such as
extra credit, graded tests, or laboratory credit.
Without access to multiple sites or classes to create a
cluster randomized design (Raudenbush, 1997), it is difficult
to conduct a true experiment in a natural classroom. How-
ever, the present investigation combined a within-subjects
and within-items design that controls variability from both
factors while retaining ecological validity. We know of
no other study of clicker-assisted learning to use such a
design in a study of content learning (but see Stowell, Oldham,
& Bennett, 2010, and Roediger, Agarwal, McDaniel, &
McDermott, 2011). In addition to providing greater control,
the within-subjects design offers a strong test of clicker effects
on targeted material, as clickers will be present in the class-
room throughout the experiment. In this way, the experiment
is not a simple ‘clicker classroom versus no-clicker classroom’
study. Instead, all subjects were exposed to clickers, and the
dependent variable measured the effect of clicker questions
on the acquisition of specific concepts targeted by clicker
questions. If clicker effects are still detected under these
conditions, the study will have found strong evidence for
clicker-enhanced learning of targeted content.
COGNITION AND CLICKERS
As elucidated in the previous section, many researchers have
made claims about the positive effect of clicker technology
on learning and memory. It is possible, however, that clicker
questions merely highlight important ideas for students. In
other words, the effect may come about by prompting
students to direct attention resources to specific items during
class and in subsequent study. Attention is a necessary first
step in creating a memory, so anything that increases
attention holds the possibility of enhancing memory. A
savvy student should be able to glean from in-class questions
the information deemed important by the instructor. It would
make sense to direct study efforts toward those topics. If this
sort of attention grabbing is at the root of the learning
enhancements observed in some clicker studies, the effects
are not particularly interesting from a cognitive or theoretical
point of view. It would also bring into question whether the
effort required to generate clicker questions, not to mention
the expense of the hardware to students, is worthwhile. After
all, it might be just as effective to give students lists of
important topics to attend to in class and during study.
A second and more theoretically interesting possibility,
one that would support the use of clickers as a means of
affecting cognitive change, is that clicker-induced retrieval
acts as a source of memory encoding. Known as the testing
effect, Karpicke, Roediger, and others have documented
that the act of retrieving information from memory can
strengthen memory and improve later recall or recognition
(Butler, Karpicke, & Roediger, 2007; Carrier & Pashler,
1992; Glover, 1989; Karpicke & Roediger, 2007a, 2007b,
2008; Roediger & Karpicke, 2006a). The testing effect has
been demonstrated using free-recall (Jacoby, 1978; Szpunar,
McDermott, & Roediger, 2008), short-answer (Agarwal,
Karpicke, Kang, Roediger, & McDermott, 2008), and multiple-
choice (Duchastel, 1981; Nungester & Duchastel, 1982) tests.
Moreover, it has been shown for various types of materials
such as word lists (Karpicke & Roediger, 2006a; Tulving,
1967), paired associates (Allen, Mahler, & Estes, 1969), and
text (Nungester & Duchastel, 1982; Roediger & Karpicke,
2006a).
Why might testing through clickers (or other means)
strengthen memory? One mechanism through which clickers
might work is to create encoding conditions that mirror those
at retrieval, a benefit traditional lecture-based learning does
not offer (Blaxton, 1989; Morris, Bransford, & Franks,
1977). In other words, testing may create conditions under
which transfer-appropriate processing may occur. That is,
answering multiple-choice questions during class or study
may enhance performance on multiple-choice exam ques-
tions. In fact, Nungester and Duchastel (1982) provided
support for the transfer-appropriate processing explanation,
as they found that short-answer and multiple-choice tests
improved later performance on final multiple-choice and
short-answer tests, but only when the formats were matched.
Although transfer-appropriate processing is one reason-
able hypothesis about the testing effect, there is evidence to
support another. Specifically, there is evidence that the
process of retrieval itself strengthens or otherwise alters the
memory trace, a possibility proposed by Bjork (1975). With
that idea in mind, Kang, McDermott, and Roediger (2007)
theorized that, if retrieval is a factor, a more demanding
retrieval task should produce stronger testing effects than a
simpler task. They found that, as long as feedback was
offered during initial tests, short-answer tests improved
performance on a final test better than the multiple-choice
or control conditions. A similar effect was reported by
McDaniel, Anderson, Derbish, and Morrisette (2007), who
also found that a more demanding, short-answer test showed
the greatest learning improvement.
Results of Kang et al. (2007) support Bjork’s (1975) notion
that the act of retrieval may strengthen the memory trace.
However, they also point to the importance of feedback in
the testing effect, a topic that has been much studied in the
literature. Overall, empirical research has demonstrated that
feedback has a generally positive effect on learning outcomes
(e.g. Butler, Karpicke, & Roediger, 2007; Pashler, Cepeda,
Wixted, & Rohrer, 2005; Sassenrath & Gaverick, 1965). Feed-
back as an explanatory mechanism for the testing effect is very
relevant to the exploration of clicker effects because many
instructors offer immediate feedback to clicker responses
by projecting graphs of class polling results. It is important
to note that the testing effect has been demonstrated in many
experiments not employing feedback (Kang et al., 2007,
Experiment 1; Marsh, Agarwal, & Roediger, 2009; Roediger
& Karpicke, 2006a, 2006b), so regardless of the factors
contributing to feedback effects, some other mechanism
unique to testing appears to be working either alongside or
integrated with feedback.
GOALS OF THE PRESENT STUDY
The published literature on clicker learning effects is troubled
by methodological issues that impede clear understanding of
the technology’s effect on learning and memory. Thus, the first
goal of the present study was to employ a methodology that
controlled subject and item differences. Toward this end, a
series of clicker questions were written for targeted exam
questions that were offered in two college-level clicker-based
classrooms, which provided ecologically valid conditions
under which to examine clicker effects. Half the questions
served as control items in one class and as clicker-targeted
items in the other. In this way, all subjects and all items served
in both conditions, thus eliminating any error introduced by
possible item and subject differences. It is important to note
that the present study was not designed as a general investiga-
tion of ‘clickers versus no clickers’ in the classroom. Rather, it
was aimed at examining the effect of clicker questions on
acquisition of the specific information they target within a
clicker classroom. If clicker effects stem from a general effect
of questioning in class, there should be no difference between
clicker and control conditions in the present study. In this way,
the present study is a strong test of clicker effects, as the
within-subjects design biases the results against the study’s
main hypothesis if clicker effects are general rather than
specific to the targeted information.
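The counterbalancing just described can be expressed as a simple assignment procedure. The following is a hypothetical sketch, not the authors' code: the function name, item identifiers, and class labels are invented, and only the scheme itself (each item serving as clicker-targeted in one class and control in the other) comes from the study.

```python
# Sketch of the within-subjects, within-items counterbalancing:
# each targeted exam item is clicker-targeted in exactly one of the
# two classes and serves as an untargeted control item in the other,
# so every item and every student contributes to both conditions.

def counterbalance(item_ids):
    """Split items so each half is clicker-targeted in one class only."""
    half = len(item_ids) // 2
    set_a, set_b = item_ids[:half], item_ids[half:]
    return {
        "class_1": {"clicker": set_a, "control": set_b},
        "class_2": {"clicker": set_b, "control": set_a},
    }

# The study used 44 targeted exam items spread across four tests.
assignment = counterbalance(list(range(44)))

# Every item is clicker-targeted in exactly one class, so item
# differences cancel out across conditions.
all_targeted = set(assignment["class_1"]["clicker"]) | set(
    assignment["class_2"]["clicker"]
)
assert all_targeted == set(range(44))
```

Because each item appears once per condition across the two classes, averaging performance over both classes removes item difficulty as a confound, which is the point of the design.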
In addition to exploring the learning effects of clickers, a
second aim of the experiment was to rule out the possibility
that clickers work by alerting students to the content of
future exam questions. Thus, we also compared performance
on test items targeted with clicker questions with perfor-
mance on the same items when students were told the
information would be on the test. If clickers work by
invoking the testing effect rather than alerting students to
important information, performance on clicker-targeted exam
questions should be equal to or better than performance
on the same items when attention ‘flags’ are given. If the
attention-grabbing hypothesis can be ruled out, the testing
effect will be the most reasonable explanation for clicker
effects. If cognitive change due to the testing effect can be
identified as the source of clicker effects on test performance,
it would mean that clicker technology offers a true learning
advantage rather than mere study prompts. Such a result
would be important to understanding the cognition
underlying clicker use and pedagogical practice.
METHOD
The experiment was designed to test two distinct hypotheses.
The first was that in-class clicker questions would have a
positive effect on students’ ability to remember factual
information and answer delayed exam questions on the same
topic. If Hypothesis 1 is correct, items targeted by clicker
questions will be answered correctly more often than when
they are not targeted by clicker questions. This result will
also serve as an important validity check of our methodology.
Because we used a within-subjects design, there is the possibil-
ity that the presence of clickers in the classroom will boost
performance on non-targeted items. A significant difference
between the clicker and simple control conditions will demon-
strate that the presence of the clickers did not contaminate the
control condition.
The second hypothesis was that clicker-mediated perfor-
mance improvement is due to directing students’ attention to
the relevant material, thus flagging certain information as
important and likely to be on the exams. Hypothesis 2 leads
to the prediction that targeting exam questions with alerts will
not increase exam performance more than targeting the same
items with clicker questions. If subjects are merely being
alerted to important topics by clicker questions, explicit alerts
should yield greater performance than the implied alerts
offered by clicker questions. In addition to testing these
hypotheses with direct measures of learning outcomes, a
survey was given to probe students’ awareness of what helped
them remember class information and direct their study efforts.
Because the cognitive processes that underlie memory forma-
tion and trace strengthening are generally outside a learner’s
awareness, students should not be consciously aware of the
role of clicker use in their test performance if the testing
effect is at work.
Subjects
Participants were undergraduates at a state university in the
eastern United States, enrolled in one of two introductory
psychology classes taught by one of the experimenters. In
the first class, 131 students were enrolled, 47% of whom
were men, and 72%, 26%, 1%, and 1% were spread across
the freshman, sophomore, junior, and senior classes, respec-
tively. In the second class, 49% of the 200 students enrolled
were men, with 61%, 5%, 32%, and 2% of the students
spread across each respective grade level. In both classes,
students generally ranged in age from 18–25 years. Students
participated in the study as part of their normal coursework,
earning points equal to roughly 14% of the final grade by
correctly answering in-class questions. Institutional review
board approval was sought prior to beginning the study,
and a waiver was granted.
Materials
The class covered 11 topics in general psychology, with a
chapter assigned for each in Discovering Psychology
(Hockenbury & Hockenbury, 2007). The class met 3 days a
week for 50 minutes over 15 weeks and was taught as a
typical lecture course with some videos, interactive activities,
and participation integrated into many of the lectures. All
lectures were accompanied by a PowerPoint presentation that
projected main points and illustrations onto a large screen.
The slides were projected with an Apple MacBook Pro com-
puter and a digital projection system. In-class clicker questions
were integrated into the PowerPoint presentations, with
individual slides dedicated to single questions. The iClicker
system was used to allow students to make their responses to
clicker questions. Students were required to purchase their
clickers along with their textbooks. The iClicker Company
supplies the receiver and software at no cost to adopting
instructors.
The exams in this class were not cumulative, each covering
only the assigned material since the previous test. Four exam
items from each course topic (44 exam items), spread across
four different tests during the semester, were chosen as targets
for the experiment. Performance on these items was the
dependent variable. A multiple-choice clicker question was
written for each exam question, all of which were also multiple
choice. All clicker and exam questions used for the study were
factual, asking only about basic, declarative information
presented in class. Appendix A provides two sample clicker–
exam question pairs. All targeted exam questions were
included on the exams for each of the classes participating in
the study.
Two independent content experts provided validation
ratings of the stimuli. Both were professors of psychology
who routinely taught introductory psychology. They were
presented with each clicker and exam question and asked
to rate them on a 7-point scale for the following dimensions:
(i) the overall quality of the question; (ii) the relevance of the
information targeted by the clicker–exam item pairs to the
content and goals of an introductory psychology course;
and (iii) the relationship between each clicker item and each
exam question. For each index, higher ratings indicated
better-quality questions, greater relevance to the course
aims, and a greater relationship between clicker and exam
items, respectively.
A cutoff mean of 4.5 was set for the quality and relevance
scores. Any question or clicker–exam question pair that did
not achieve a mean rating of 4.5 on all these dimensions
was not used in the study. The mean overall quality rating
for the clicker and exam questions used in the study was
6.11 and 6.09, respectively. The range of mean scores was
5.0–6.5 for the clicker questions and 5.5–7.0 for the exam
questions. The mean relevance of the material to the course
was 6.36, with a range of 4.5–7.0.
To establish the strength of the relationship between
clicker and exam question pairs, the raters were asked to
indicate the extent to which correctly answering each clicker
question required retrieval of the same information from
memory as each exam question. This was performed for
two reasons. The first was to validate each clicker question
as a reasonable test of the same knowledge as its intended
exam target. Thus, a high rating established that the clicker
questions were directly accessing the memory relevant to
their respective exam questions. The second reason was to
ensure that there were no ‘spillover effects’ of clicker items
to exam items for which they were not intended. Toward that
end, each clicker question was also evaluated against each of
the other exam items used as the study’s dependent measure.
Low ratings between each of the clicker questions and the
experimental exam questions for which they were not
intended indicate low likelihood that clicker questions would
affect performance on test questions other than those for
which they were intended. Establishing control of the
independent variable in this way is important, as such
spillover effects could contaminate other items or conditions.
If the information required to answer each question in a
clicker–exam pair was identical, they were asked to rate it
with a 7. If the information was unrelated and easily separa-
ble, they were asked to rate it with a 1. Ratings from 2–6
indicated commensurate degrees of relatedness.
For the clicker–exam item relatedness scores, pairs of items
that did not achieve a mean rating of 4.5 were not used in the
study. Likewise, any exam question that was rated with a
relatedness score higher than 3 with any clicker question for
which it was not intended was not included in the study. The
mean rating of the intended clicker–exam question pairs was
6.80 (with a mean range of 5–7), and the mean rating between
unintended pairs was 1 (with all ratings at 1).
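The screening rules above can be summarized as a small filter. This sketch is ours, not the authors' (the function name and data layout are hypothetical, and we assume the cross-item rule applies to each individual rating):

```python
# Hypothetical sketch of the stimulus-screening cutoffs described above.
# A clicker-exam pair survives only if: mean quality >= 4.5, mean course
# relevance >= 4.5, mean relatedness to its intended exam item >= 4.5,
# and no relatedness rating above 3 with any unintended exam item.

def keep_pair(quality, relevance, intended_rel, unintended_rel):
    """Return True if a clicker-exam pair passes every screening cutoff."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(quality) >= 4.5
            and mean(relevance) >= 4.5
            and mean(intended_rel) >= 4.5
            and all(r <= 3 for r in unintended_rel))

# Two raters on 7-point scales: this pair passes...
print(keep_pair([6, 7], [6, 7], [7, 7], [1, 1, 1]))   # True
# ...and this one fails on cross-item relatedness:
print(keep_pair([6, 7], [6, 7], [7, 7], [1, 4, 1]))   # False
```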
Procedure and experimental design
The basic information needed to correctly answer each of
the 40 targeted exam questions was presented during class
lectures, with the information printed on a projected slide
638 A. M. Shapiro and L. T. Gordon
Copyright © 2012 John Wiley & Sons, Ltd. Appl. Cognit.
Psychol. 26: 635–643 (2012)
at an appropriate time during lecture. Clicker questions were
offered during lecture at varying time intervals that were not
predictable to students. They were offered after a topic was
covered and only after the instructor both solicited and
answered any questions from the class. Anywhere from
one to five clicker questions were asked on any given day
in class. Some of these were not experimental clicker items
but were used as ‘filler’ questions to provide sufficient credit
for students. Points earned by correctly answering clicker
questions over the course of the semester amounted to
roughly 14% of the final grade.
One set of 20 items was chosen to test Hypothesis 1,
regarding the learning effects of clickers. A separate set
of 20 items was chosen to address Hypothesis 2, regarding
the cognition underlying clicker effects. Regardless of which
hypothesis was being tested, the clicker questions were
offered in the same way, as previously described. Within each
subset of 20 clicker–exam question pairs created for the
experiment, 10 were assigned to the clicker condition in one
class and to the control condition in the other class. The oppo-
site assignment was made for the other 10 items. Thus, each
subject and item contributed equally to both conditions.
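The counterbalancing scheme can be sketched as follows; the helper and item labels are hypothetical and illustrate only the assignment logic described above:

```python
# Hypothetical sketch of the counterbalanced design: within each 20-item
# set, half the items serve as clicker items in one class and as control
# items in the other, and vice versa for the remaining half.

def counterbalance(items):
    """Split items so each appears once as clicker and once as control."""
    half = len(items) // 2
    class_a = {"clicker": items[:half], "control": items[half:]}
    class_b = {"clicker": items[half:], "control": items[:half]}
    return class_a, class_b

items = [f"item{i:02d}" for i in range(1, 21)]  # one 20-item stimulus set
a, b = counterbalance(items)

# Every item contributes to both conditions across the two classes:
assert set(a["clicker"]) == set(b["control"])
assert set(a["control"]) == set(b["clicker"])
```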
Whereas the procedure for presenting clicker questions
was identical across item sets used to test each hypothesis,
the procedure used to create the control conditions differed.
In the case of Hypothesis 1, the information relevant to the
control item was simply presented as part of the class lecture,
with the information included on a PowerPoint slide. Thus,
the conditions were merely set up to compare learning when
clickers are used or not. For Hypothesis 1, then, the condi-
tions will be referred to as the clicker1 and simple control
conditions. For the second hypothesis, the experimental
and no-clicker control conditions will be referred to as the
clicker2 and attention-grabbing conditions, respectively.
When the information necessary to answer an exam question
targeted as an attention-grabbing item was presented in class,
it was highlighted on the projected slide. The instructor’s
remote was used to turn the font red and pulse the text. In
addition, the instructor announced, ‘This information is very
important. It is likely to be on your test.’ These attention-
grabbing ‘flags’ were offered either just before or during
the presentation of the relevant information.
Students were allowed 40–90 seconds to answer each
question, depending upon how long the question was. When
a question was projected, a timer also appeared on the
screen, thus making students aware of the time limit. After
students had submitted their responses, a bar chart showing
the percentage of the class to respond with each option was
projected onto the screen, and the instructor highlighted the
correct answer in red by clicking on the bar. In this way,
students received feedback about their responses to each
question. If less than 90% of the class correctly answered
an item, the instructor explained the correct answer, whether
students posed questions or not. On all but a few of the
clicker items used in the study, however, students scored
90% or higher and asked no questions after seeing the
correct answer.
An in-class survey was also given to students 1 week
before the end of the semester. The survey was designed to
solicit students’ conscious impressions of factors affecting
their memory and study strategies. The survey was adminis-
tered by projecting the questions onto the screen during
class. Each question was projected individually, and the
instructor read each aloud. Students were asked to indicate
a response to each question using a 5-point Likert scale with
their clickers. Students were given 15 seconds to respond to
each question. Specifically, students were asked how much
the in-class questions, the highlighted information on the
PowerPoint slides, and instructor emphasis affected their
choices about what to study. They were also asked to rate
how much each of those factors enhanced their learning
and memory of class material. None of the class results
were projected to the class or reported to them before the
last test.
RESULTS AND DISCUSSION
Students who attended fewer than 60% of the classes over
the semester were excluded from the analysis, as their
exposure to the independent variable was considered too
low to accurately reflect the effect of the intervention. Like-
wise, students who missed more than one exam were also
excluded, as these students were missing at least half the
data. A total of 226 subjects were included in the analysis.
As a check of the equivalence of the independent and
dependent variables used to test the two hypotheses, a paired
t-test was conducted to compare subjects’ performance on
the 20 exam questions used to test Hypothesis 1 when
assigned to the clicker1 condition with the second set of 20
used to test Hypothesis 2 when assigned to the clicker2
condition. Students scored a mean of 69.8 (SD = 17.9) on
the clicker1 items and 72.1 (SD = 17.8) on the clicker2
items. The difference was non-significant in a paired t-test,
t(225) = 1.62, p > .05. An unpaired t-test was conducted to
compare means when calculated by items. Those in the
clicker1 condition were correctly answered by a mean of
68.6% (SD = 17.8) of students, and those in the clicker2
condition by 72.5% (SD = 17.8) of students. The difference
was non-significant, t(38) = 0.74, p > .05.
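The by-subjects (paired) and by-items (unpaired) comparisons correspond to standard t-tests. This SciPy sketch uses synthetic normal scores generated from the reported means and SDs, not the study data, so the resulting p-values are illustrative only:

```python
# Illustrative version of the equivalence check described above.
# The score arrays are synthetic stand-ins, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clicker1 = rng.normal(69.8, 17.9, 226)   # per-subject % correct, set 1
clicker2 = rng.normal(72.1, 17.8, 226)   # per-subject % correct, set 2

# Paired t-test by subjects (each subject contributed to both item sets):
t_subj, p_subj = stats.ttest_rel(clicker1, clicker2)

# Unpaired t-test by items (the two sets contain different items):
items1 = rng.normal(68.6, 17.8, 20)      # per-item % of students correct
items2 = rng.normal(72.5, 17.8, 20)
t_item, p_item = stats.ttest_ind(items1, items2)

print(round(float(p_subj), 3), round(float(p_item), 3))
```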
As proponents of item response theory have shown,
learner characteristics cannot be separated from test charac-
teristics, as they interact to determine exam performance
(Baker, 2001; Van der Linden & Hambleton, 1997). To
better account for variability among subjects and test items,
comparisons between control and experimental groups were
conducted by both subjects and items. According to item
response theory, an individual’s latent traits such as intelli-
gence and motivation and item factors such as difficulty
and discrimination contribute to overall test results. Averag-
ing across both subjects and items offers some degree of
assurance that the outcome analysis is not unduly influenced
by one set of characteristics. That is, agreement between
subject and item analyses offers stronger confirmation of
an effect than one analysis alone.
Hypothesis 1: Clicker questions improve learning
Students answered a mean of 61.4% (SD = 17.8) of the
targeted exam questions correctly when they were not
given in-class clicker questions on the relevant content, as
compared with 69.8% (SD = 17.9) when the same items were
included in the clicker condition. The 8.4-point difference
between means represents a 13.7% improvement on exam
questions from control to clicker conditions, and the difference
between a traditional letter grade of D− versus C−.
The difference was significant when analyzed by subjects,
t(225) = 5.78, p < .001, d = 0.38. The difference was also
significant when analyzed by items, t(19) = 3.46, p < .01,
d = 0.77. For exam items in the no-clicker control condition,
a mean of 62.2% of students answered correctly. When the
same items were used in the clicker condition, a mean of
68.6% of students answered correctly. The 6.4-point differ-
ence represents a 10.3% performance increase on exam
items when in-class clicker questions were asked about
relevant content.
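The reported improvements and effect sizes can be reproduced from the summary statistics alone; the d = t/√n conversion used below is our assumption about how the effect sizes were derived:

```python
# Checking the reported percentages and effect sizes from the summary
# statistics above. The d = t / sqrt(n) formula for within-subjects
# designs is an assumption, not stated in the paper.
import math

# Subject analysis: 61.4% (control) vs 69.8% (clicker), an 8.4-point gain
improvement_subj = (69.8 - 61.4) / 61.4 * 100
print(round(improvement_subj, 1))      # 13.7 (% improvement)

d_subj = 5.78 / math.sqrt(226)         # t(225) = 5.78, n = 226 subjects
print(round(d_subj, 2))                # 0.38

# Item analysis: 62.2% vs 68.6%, a 6.4-point gain
improvement_item = (68.6 - 62.2) / 62.2 * 100
print(round(improvement_item, 1))      # 10.3 (% improvement)

d_item = 3.46 / math.sqrt(20)          # t(19) = 3.46, n = 20 items
print(round(d_item, 2))                # 0.77
```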
These results strongly support the conclusion that asking
students factual, multiple-choice questions enhances mem-
ory for the relevant information on delayed, factual test
questions. They suggest that the technology may be taking
advantage of the testing effect in the classroom. The magni-
tude of the observed effect is not unprecedented, as prior
studies have shown that a single testing episode in advance
of a final test has been shown to enhance learning by even
greater amounts (see Roediger & Karpicke, 2006b for a
review). As any reasonable critic would rightly point out,
however, it may be the case that clicker questions do not
strengthen memory traces or connections leading to them.
Rather than affecting true cognitive change, the questions
may merely cue students that the instructor deems certain
pieces of information to be of particular importance. If so,
it would certainly be reasonable for students to focus more
on that information during study, thus augmenting perfor-
mance on test items targeting that information. Analysis of
the second stimulus set and the survey results addresses
that issue.
Hypothesis 2: Clicker questions improve learning by
alerting students to important material
The attention-grabbing hypothesis was not supported by
the comparison of the clicker2 and attention-grabbing condi-
tions. When information was highlighted on class slides and
students were told it was important and would be included
on the test (the attention-grabbing condition), students
correctly answered an average of 70.1% (SD = 17.8) of
the targeted exam questions. When they were not told the
material was of particular importance but were given clicker
questions about the material (the clicker2 condition), they
correctly answered 72.1% (SD = 17.2). The difference was
not statistically significant, t(225) = 1.33, p > .05. Analyzed
by items, an average of 68.7% students correctly answered
targeted exam questions when they were in the attention-
grabbing condition and 72.5% correctly answered the same
items when assigned to the clicker2 condition. The difference
just reached significance and had a medium effect size,
t(19) = 2.06, p = .05, d = 0.46. In short, offering a clicker
question improved performance on delayed exam questions
as well or better than explicitly telling students that the
information would be on the test.
Class survey
Unpaired t-tests comparing class responses to the survey
questions indicated no significant differences between clas-
ses with respect to how they answered any of the survey
questions. As such, all of the data for both classes were
combined for the analysis. To elicit students’ candid
responses, the surveys were anonymous. As such, it was not
possible to identify the students who attended fewer than 60%
of classes or missed more than one test. Thus, the survey results
represent the entire class, rather than the subset of students
used for the study.
A repeated-measures analysis of variance with a Green-
house–Geisser correction comparing students’ responses
with the questions probing how much the clicker questions,
professor emphasis, and slide emphasis helped them to
learn the material was significant, F(1.77, 476.38) = 68.409,
p < .001, η²partial = .20. Students reported that answering the
clicker questions was slightly less than moderately helpful
in learning the material, as the average rating was 2.84 on
a 1–5 scale. The means were 3.68 and 3.39 for professor
and slide emphasis, respectively. Pairwise comparisons
using the Bonferroni correction indicated that students felt
that the clicker questions had significantly less impact
on learning class material than both the slide emphasis
(p < .01) and the instructor’s verbal remarks (p < .01). The
difference between slide and instructor emphasis was also
significant, p < .01.
Survey questions also probed students for information
about what guided their decisions about what to study. If
clicker questions were effective because they drew students’
attention to material to be tested, one would expect that
students would have used that information to direct their
study efforts. Student responses, however, do not indicate
that clicker questions were highly influential, as they rated
their impact on study choices with a moderate mean of
3.04. Students rated the professor’s verbal remarks and
highlighted information on the slides much higher (4.28 and
3.86, respectively) than the clicker questions. A repeated-
measures analysis of variance with a Greenhouse–Geisser
correction indicated that the differences were significant,
F(1.76, 481.34) = 184.012, p < .001, η²partial = .40. Again,
pairwise comparisons using the Bonferroni correction indicated
significant differences between ratings for the clicker and slide
emphasis, clicker and instructor emphasis, and slide and
instructor emphasis, all at p < .01.
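A Bonferroni-corrected pairwise comparison of this kind can be sketched as below; the Likert ratings are synthetic stand-ins generated around the reported means, not the actual survey responses:

```python
# Sketch of Bonferroni-corrected pairwise paired t-tests, as used for the
# survey analysis above. The 1-5 Likert ratings are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 273  # illustrative respondent count, not from the paper
clicker = np.clip(rng.normal(3.04, 1.0, n).round(), 1, 5)
slides  = np.clip(rng.normal(3.86, 1.0, n).round(), 1, 5)
prof    = np.clip(rng.normal(4.28, 0.8, n).round(), 1, 5)

pairs = [("clicker vs slides", clicker, slides),
         ("clicker vs prof", clicker, prof),
         ("slides vs prof", slides, prof)]
k = len(pairs)  # family size for the Bonferroni correction
for name, x, y in pairs:
    t, p = stats.ttest_rel(x, y)
    p_adj = min(1.0, float(p) * k)  # Bonferroni: inflate each p by k
    print(name, round(p_adj, 4))
```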
In sum, the results of the Hypothesis 1 analysis demon-
strate that clicker technology is an effective classroom
learning tool. Performance on delayed, targeted exam ques-
tions increased significantly when the information was tested
in class shortly after learning the material. The test of
Hypothesis 2 demonstrated that clicker questions were
equally or even more effective than cues about the content
of future exams. Although the magnitude of the clicker effect
is not great enough to rule out attention grabbing as a factor
in clicker effects, attention grabbing does not fully account
for clicker effects. The clicker results support the conclusion
that the testing effect seems to be working in tandem with
attention grabbing to produce the clicker effects established
in the test of Hypothesis 1. Although clicker questions may
serve to guide some students about what to study, it is clear
that more is at work than increased study of clicker-targeted
materials. After all, students rated the clicker questions as
less influential in guiding their study efforts than actually
telling them what would be on the exam. Moreover, one
would not expect students to be consciously aware of
in-class questions augmenting memory because the testing
effect stems from unconscious cognitive processes (we do
not have conscious access to the cognitive processes
underlying memory construction or consolidation). That
prediction is borne out by the relatively low ratings of clicker
questions as memory enhancers.
GENERAL DISCUSSION AND CONCLUSIONS
One purpose of the present investigation was to document
the positive effects of classroom clicker use on learning by
employing a methodology that addresses the shortcomings
of some prior studies. Specifically, by using a within-items
and within-subjects design, while still conducting the study
in live classrooms, the experiment was designed to tighten
experimental control while maximizing ecological validity.
Because the design was within subjects and students were
using clickers in the same lectures in which they were
exposed to the control condition content, the significant
performance difference between clicker and control items
indicates a strong effect of clicker questions on targeted
information acquisition. The second goal was to provide
evidence about the cognition underlying clicker-assisted
learning effects. The experiment demonstrated that clickers
are effective pedagogical tools. Performance on delayed
exam questions increased significantly when the information
was targeted by in-class clicker questions. It also revealed
that clicker questions were equally or more effective than
cuing students about the information being on a future exam.
The results support a role of the testing effect in clicker-
assisted learning; however, the equivalent performance of
the clicker and attention-grabbing groups in the subject
analysis of Hypothesis 2 does not completely rule out the
role of attention grabbing in clicker effects. It is likely that
the testing effect is working in tandem with attention grab-
bing and perhaps some increased study of clicker-targeted
information. The data trend seen in the means of that
analysis, however, is in a direction opposite to what the
attention-grabbing hypothesis predicts. Moreover, the analy-
sis by items indicated a significant advantage of clicker
questions over alerts, with a moderate effect size, although
the survey results indicated students actually studied the
information in the alert condition more than the clicker
condition. The latter point is remarkable because it reveals
that students performed better on the very questions they
reported attending to less during study (i.e. the clicker-
targeted items). Further, the clicker questions were only
offered after information was presented in class, so they
could not have served to increase attention during lecture.
The attention alerts, however, were often given before or in
the middle of explanations, so attention actually should have
been greater in the attention condition. On balance, the
weight of evidence cannot rule out a role of attention
grabbing in clicker effects, so the ability of clicker questions
to ‘flag’ information should be further explored in future
studies.
To whatever degree attention grabbing is at play in clicker
effects, there seems to be something about the actual act of
answering clicker questions (apart from attention grabbing)
that enhances memory for lecture content. One possible
mechanism through which answering clicker questions may
enhance memory for class material is repetition. That is,
clicker questions may merely offer multiple exposures to
the information. After the information is provided in class,
the clicker questions serve as a second exposure, thus
enhancing the strength of memory for the material. However,
the magnitude of the improvement seen in the clicker1 versus
simple control analysis (10–13%) is hard to explain by a
single re-exposure to the material during class. Perhaps
students studied clicker-targeted material more, thus increas-
ing exposure to the material outside of class. If so, the results
might be attributable to repetition effects, after all. The
survey results, however, indicated that the alerts were more
influential than clicker questions in directing students’ study
efforts. Given that students’ self-reports indicate that they
spent significantly more time studying the information that
was highlighted in class than the information targeted by
the clicker questions, one would expect greater repetition
and learning in the attention-grabbing condition as opposed
to the clicker2 condition. Because performance on items
assigned to the clicker2 condition was better than that on
items assigned to the attention condition, that possibility is
not supported by the data. Although the present study was
not designed to specifically rule out repetition effects, it does
offer indirect evidence against repetition as the power behind
clicker effects.
Because the present data make repetition effects unlikely
as a source of clicker effects, the most likely explanation is
that the testing effect is at work. The mechanism underlying
the testing effect has been researched at length, with
evidence reported in support of feedback (Butler et al., 2007),
transfer-appropriate processing (e.g. Nungester & Duchastel,
1982), and trace strengthening (e.g. Kang et al., 2007). The
present study was not designed to distinguish between these
possibilities. However, it is logical to conclude that the
feedback students receive about their performance was useful
in either reinforcing or correcting recently learned information.
Because students are often poor judges of their own memory
and learning (Bjork, 1999; Koriat, 1993; Koriat & Bjork,
2005), they often confuse familiarity with robust memory. That
is, students who spend time re-reading the text or ‘going over’
their notes often lack the metacognitive skills needed to
separate the subjective feeling of familiarity gained from this
type of lax ‘studying’ from true knowledge. However, clicker
questions challenge students to retrieve recently learned
information, thus providing unambiguous feedback about their
understanding, which may be a factor in their effectiveness.
Alternatively, transfer-appropriate processing is another viable
explanation for the present results, as the clicker and exam
questions were all offered in the same format. Of course,
whether each of these interpretations is valid is a question to
be explored in future studies directed at distinguishing between
all these likely mechanisms during clicker use.
One limitation of the study is that the measure of students’
study emphasis was a self-report, which is less reliable than a
direct measure. Although future studies may examine that
variable using a different methodology, the narrow focus of
the present work was to control as much error as possible
in the sample and in the stimuli to determine whether clicker
questions enhance retention of targeted material. The present
design offers a rigorous test of that hypothesis. Also, it
would have been ideal to use the same items to test each
hypothesis and fully counterbalance them between the
clicker, control, and attention-grabbing conditions. It is a
limitation of the study that separate items were used to test
each hypothesis, thus preventing direct comparisons between
their respective items. The decision was made to create
separate stimulus sets for each hypothesis because there were
only two classes available for the study. As such, it was not
possible to fully counterbalance test items between all three
conditions (clicker, no clicker, and attention grabbing), and
the tight control attained through full counterbalancing was a
crucial methodological issue in this experiment. The current
design, however, still allowed the important comparisons
necessary to address Hypotheses 1 and 2. The only compari-
son that could not be made while simultaneously controlling
item differences was between the simple control and atten-
tion-grabbing items. Because that comparison would not
inform the aims of the study, it was seen as a reasonable
compromise. The differences between the attention and simple
control groups were, in fact, rather robust in the subject
analysis and in the predicted direction in both analyses
(70.1% vs 61.4% in the subject analysis and 68.7% vs 62.2%
in the item analysis, respectively), suggesting that the
attention-grabbing manipulation was indeed effective at
promoting attention and study of certain facts. The demon-
strated equivalence between questions used in the clicker1
and clicker2 conditions supports the validity of the differences
between the attention and simple control groups and thus the
validity of the attention-grabbing manipulation.
Another limitation was the narrow focus of the investiga-
tion necessitated by the within-subjects design, as no clicker-
free control condition could be included in the study.
Without a comparison group that used no clickers at all,
the present results cannot determine whether the benefits of
clickers also extended to some degree to the untargeted test
questions. It is certainly possible that untargeted question
performance was also boosted by clicker use, but just to a
lesser extent than the targeted questions. Indeed, some
studies have shown an effect of clicker use on untargeted
material (e.g. Mayer et al., 2009). Finally, the present study
examined only one aspect of learning, fact retention. It did
not examine the effect of clicker questions on the develop-
ment of conceptual understanding, problem solving, critical
thinking, or other aspects of learning. It will be important
for future studies to weigh the benefits of clickers in
those areas.
From the point of view of practice, the data offer encourag-
ing news to educators, particularly those teaching large groups
of students. The data suggest that although some attention
grabbing may contribute to the observed benefits of clickers,
the questions are also affecting real cognitive change in the
classroom, thus offering real learning advantage to students.
With teacher investment of just a few minutes to incorporate
a clicker question into a presentation and a minute or so of
class time to present, class performance on delayed exam items
can be significantly and meaningfully increased. In the present
study, the clicker questions were associated with a perfor-
mance increase of roughly 10–13%, which seems to be a good
return on investment. The technology has its limits, as only so
many questions can reasonably be asked in a single class
meeting, but the evidence strongly suggests that clickers are
a profitable investment for teachers and students.
REFERENCES
Agarwal, P. K., Karpicke, J. D., Kang, S. K., Roediger, H. L., &
McDermott, K. B. (2008). Examining the testing effect with
open- and
closed-book tests. Applied Cognitive Psychology, 22, 861–876.
DOI:10.1002/acp.1391
Allen, G. A., Mahler, W. A., & Estes, W. K. (1969). Effects of
recall tests on
long-term retention of paired associates. Journal of Verbal
Learning &
Verbal Behavior, 8(4), 463–470. DOI:10.1016/S0022-
5371(69)80090-3
Baker, F. (2001). The basics of item response theory. College
Park, MD: ERIC
Clearinghouse on Assessment and Evaluation, University of
Maryland.
Beekes, W. (2006). The “millionaire” method for encouraging
participation.
Active Learning in Higher Education: The Journal of the
Institute for
Learning and Teaching, 7, 25–36.
Bjork, R. A. (1975). Retrieval as a memory modifier: An
interpretation of
negative recency and related phenomena. In R. L. Solso (Ed.),
Information
processing and cognition: The Loyola symposium (pp. 123–
144). Hillsdale,
NJ: Lawrence Erlbaum Associates, Inc.
Bjork, R. A. (1999). Assessing our own competence: Heuristics
and illusions.
In D. Gopher, & A. Koriat (Eds.), Attention and performance
XVII:
Cognitive regulation of performance: Interaction of theory and
application
(pp. 435–459). Cambridge, MA: MIT Press.
A. M. Shapiro and L. T. Gordon. Copyright © 2012 John Wiley & Sons, Ltd. Appl. Cognit. Psychol. 26: 635–643 (2012)
APPENDIX A
SAMPLE CLICKER–EXAM QUESTION PAIRS

Sample 1.
Clicker question: Which of the following is true about punishment?
A. Punishment is most effective if it always immediately follows the behavior.
B. Punishment works by reducing an undesired behavior.
C. Punishment can be ineffective if a big enough reward can be had by producing the behavior in question.
D. All of the above.

Exam question: Punishment is most effective if:
A. it immediately precedes the operant.
B. it consistently follows the operant.
C. it occasionally follows the operant.
D. there is considerable delay between the operant and the punishment.

Sample 2.
Clicker question: The major difference between a primary and a secondary reinforcer is that primary reinforcers are naturally satisfying while a secondary reinforcer
A. is something we learn to like.
B. is usually an indirect form of a primary reinforcer.
C. Both A and B.
D. None of the above.

Exam question: Whereas a primary reinforcer derives its reinforcing value _____, conditioned reinforcers derive their reinforcing value _____.
A. from conditioned reinforcers, from primary reinforcers
B. naturally, from primary reinforcers
C. from conditioned reinforcers, naturally
D. naturally, from conditioned stimuli
Name (First/Last):
Highlight All Answers
Step 1 - Book Search
1. [Instructions] Using the Woodbury Library Catalog (library.woodbury.edu), search for the book assigned to you. Above, you will see a number to the left of your name; locate that number on the Excel spreadsheet in Moodle to find your assigned book. Once you have found your book in the Catalog, open the "Libraries to search" dropdown menu and select Woodbury University Library. Once you have located the book in the Catalog, click on its title.
2. Name of the book (Once upon a car by Vlasic).
3. Please answer the following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of Pages:
f. Place of publication (city/state):
g. What is the OCLC Number:
h. How many related Subject Words have been applied to this item (you can find the number of subject words at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5
.K55 A618 2012):
4. Create a proper APA citation:
5. Go to the shelf and locate your assigned book.
a. Take a photo of the front cover of the book
b. Take a photo of the table of contents
c. NOTE: Attach both images to this document (NO images from the Internet are allowed!). Shrink the images and place both images HERE (DO NOT PUT THEM AT THE END).
Step 2 - Library of Congress Classification
1. [Instructions] Using Google, search for: Library of Congress
Classification outline. The first result will be the link you will
want to select. Click on the link and you will be redirected to
the Library of Congress Classification Outline webpage.
2. Using the call number of your book (which you identified in Step 1, question 3.k), answer the following:
a. Name of the Class (example K = Law):
b. Name of the Subclass (KZ = Law of nations):
3. Under the Subclass:
a. What is the call number range for your book (Example:
KZ170-173):
b. What is that section or call number range called (Example:
Annuals):
c. Does this properly describe your book? (Make sure you look
at the book on the shelf in the library to answer this question):
i. Yes/No:
ii. Why?:
Step 3 – Searching Your Topic
1. What is your Major (example: History):
2. Write out two search words using the word “AND” to find a
book in your major (example: economy and students):
3. Using the Woodbury Library catalog, enter your two search
words (using the word AND) and run a search.
a. How many results did you retrieve from Libraries Worldwide:
b. How many results did you retrieve from Woodbury
University Libraries:
c. How many results did you retrieve from Burbank (if you have
zero results, change your search words to find a result):
4. Follow these instructions carefully. First, on the left side of the screen, click the link called "Print Book" under Format. Second, select the second title listed in the results. Now answer the following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of Pages:
f. Place of publication (city/state):
g. What is the OCLC Number:
h. How many related Subject Words have been applied to this item (you can find the number of subject words at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5
.K55 A618 2012):
5. Create a proper APA citation:
6. Go to the shelf and locate the book you found.
a. Take a photo of the front cover of the book
b. Take a photo of the table of contents
c. NOTE: Attach both images to this document. Shrink the images so they can fit on one page. NO images from the Internet are allowed!
Updated: Tuesday, October 04, 2016
The basics of sentences session 2pptx copy.pptx
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 

[Type text][Type text][Type text]The Fif.docx

The Fifteen Ethical Traps and Lessons Learned on Avoiding Them

Student name:

Ethical Trap and Avoidance Mechanism One: Justification

The justification trap involves people excusing bad decisions and unethical behavior by claiming they are necessary, or the need of the hour. This trap makes people believe that their unethical behavior serves a greater good. The world has seen many instances of this trap: killing a group of people who happen to belong to a particular religion has been justified time and again as a means of protecting one's own religion.

This trap can be countered with the reputation perspective, which calls for a person to take responsibility and do the right thing at all times. Its principles ask a person to act justly and to show integrity and courage.

Ethical Trap and Avoidance Mechanism Two: Money

Money is often treated as the means to happiness, even as one of the goals of life. This trap pushes people to acquire as much money as they can, regardless of how it is earned, in pursuit of the happiness they want. In much of the world a person is judged by money, and it has displaced other measures of comparison. The urge to earn money fast tempts people to take wrong measures for easy profits, to evade taxes, and to indulge in illegal activities.

This trap can be countered by weighing people on the scale of their work and their attitude toward other people, and by giving money less weight in life.

Ethical Trap and Avoidance Mechanism Three: Conflicts of Interest

The conflicts-of-interest trap arises when a person is caught in the middle of a situation involving two parties, where only one party can benefit while the other loses. The conflict of interest is created when the person resolves the dispute according to what benefits him the most, for example by taking a bribe from one party in order to rule in its favor.

Mill's principles can help avoid conflicts of interest: always act according to the rules. Integrity and honesty ensure that the person makes the best decision, and never accepting favors while working within the rules keeps this trap at bay.

Ethical Trap and Avoidance Mechanism Four: Faceless Victims
This trap involves generalizing victims. Doing so diminishes, in the mind of the perpetrator, the unethical behavior done to those affected. By not picturing the victims' pain as human pain, it becomes easier to avoid taking responsibility for the damage caused; referring to people who died in a war merely as numbers is part of this trap.

The trap can be avoided by ensuring that all people are seen the same way: the human element should never be stripped from any victim. Responsibility should be taken, and measures taken so that the damage can be compensated in some way rather than shrugged off. True integrity and courage are required to avoid this trap.

Ethical Trap and Avoidance Mechanism Five: Conformity

This trap is aligning one's attitudes, beliefs, and behavior simply to fit in. If coworkers are not diligent or honest, a person faces the choice of doing the same or being nagged constantly and left out of the group for behaving differently.

Conformity can be avoided through self-satisfaction. A person who knows he does not require a group to be happy can work as he wishes without changing his beliefs. Strength and integrity ensure a person does what is required at all times, with no need to become like someone else just to fit in.

Ethical Trap and Avoidance Mechanism Six: Advantageous Comparison

This trap involves comparing one's action with something worse. It gives the satisfaction that what the person did is better than what he could have done, and so the action gets validated.

It can be avoided by comparing the action only with something of equal or higher nature, or not at all. A person who compares against a higher standard will know what he did wrong, so the mistake will not be repeated.

Ethical Trap and Avoidance Mechanism Seven: Obedience to Authority

This trap involves obeying someone who holds more power. If a manager asks for something that may be wrong, an employee will obey because disobedience could get him fired. Obedience without understanding the nature of the work or the consequences of the action, driven by fear, is an ethical trap.

It can be avoided by ensuring that work is done for the firm's welfare and not merely to please authority. The authority can be asked to explain why a particular task must be done. Blind faith in authority is avoided through the strength to always do right and to question what is wrong.

Ethical Trap and Avoidance Mechanism Eight: Alcohol

This trap involves drinking alcohol to wash away bad feelings. These feelings arise from having done something wrong, and intoxication makes a person forget them temporarily.

It can be avoided by recognizing that alcohol is not a permanent solution. It takes courage and integrity to accept that something wrong has been done, and to work on it so that such things are not repeated.

Ethical Trap and Avoidance Mechanism Nine: Contempt for Victim

This trap involves dehumanizing victims to make it easier to harm them. When victims are seen as just numbers, and employees merely as hired help, those in authority stop seeing them as individual human beings.

It can be avoided by seeing people as human. This requires inner strength and the ability to foresee the impact of the harm upon them. Putting ourselves in their position provides a reality check that ensures they are seen as humans in the future.

Ethical Trap and Avoidance Mechanism Ten: Competition

This trap involves competition between two or more people or parties. Competition is the urge to get ahead of the other, and it sometimes becomes so fierce that one or both parties break rules and resort to unethical practices to harm the other and pull ahead.

It can be avoided through mutual respect among the parties; competitors can coexist only if they respect each other. Competition is fair, and good, when everything is done within the rules and regulations.

Ethical Trap and Avoidance Mechanism Eleven: We Won't Get Caught

This trap is the self-deluding feeling that nothing will happen to us even if we do something wrong. Lack of faith in justice, or a crooked system, often convinces people that their unethical practices are safe and will never be traced back to them.

It can be avoided by trusting that justice will prevail sooner or later; even if the system fails to catch them, their conscience will haunt them. Integrity and honesty in work ensure that people always do the right thing and never develop the feeling that they won't be caught, since they would always know what they did.

Ethical Trap and Avoidance Mechanism Twelve: Anger

This trap involves covering up fear by showing hostility toward others. Anger covers guilt, but it also keeps people at a distance from the person it is directed at. Anger is a very powerful emotion that can make people aggressive.

It can be avoided by having sympathy and love toward every person. When the anger itself weakens, a person will have no fear in admitting guilt to the person he wronged.

Ethical Trap and Avoidance Mechanism Thirteen: Small Steps

This trap involves committing unethical acts in small steps, which makes the person tolerant of their unethical nature. It grows more severe as the person becomes accustomed to it and keeps raising the bar of what counts as a small step.

It must be avoided at the initial stages: every small step should be seen as a sign of guilt and dealt with firmly, never accepted, so that its gravity never increases.

Ethical Trap and Avoidance Mechanism Fourteen: Tyranny of Goals

This trap pushes people to move fast to achieve their goals, even if it means cutting corners on the way to the prime goal. This can involve shortcuts and unethical approaches just to finish the work.

It can be avoided by ensuring that goals are completed as they were originally defined. Nothing should be left behind, and no goal should be modified just to claim it has been reached; the quality and the intended form of the goal should remain untouched.

Ethical Trap and Avoidance Mechanism Fifteen: Don't Make Waves

This trap involves using authority to keep everyone quiet on a subject to avoid suspicion or challenge. It ensures that everyone stays silent and no steps are taken to unearth a matter suspected of unethical behavior.

It can be avoided by ensuring that everyone gets a say in the matter: meetings are regular, and everyone may speak their mind without fear of being reprimanded later. Free thinking and the willingness to admit wrongdoing ensure that this trap is avoided.


A Controlled Study of Clicker-Assisted Memory Enhancement in College Classrooms

AMY M. SHAPIRO1* and LEAMARIE T. GORDON2
1Psychology Department, University of Massachusetts Dartmouth, Dartmouth, MA, USA
2Psychology Department, Tufts University, Medford, MA, USA

Summary: Personal response systems, commonly called 'clickers', are widely used in secondary and post-secondary classrooms. Although many studies show they enhance learning, experimental findings are mixed, and methodological issues limit their conclusions. Moreover, prior work has not determined whether clickers affect cognitive change or simply alert students to information likely to be on tests. The present investigation used a highly controlled methodology that removed subject and item differences from the data to explore the effect of clicker questions on memory for targeted facts in a live classroom and to gain a window on the cognitive processes affecting the outcome. We found that in-class clicker questions given in a university psychology class augmented performance on delayed exam
questions by 10–13%. Experimental results and a class survey indicate that it is unlikely that the observed effects can be attributed solely to attention grabbing. Rather, the data suggest the technology invokes the testing effect. Copyright © 2012 John Wiley & Sons, Ltd.

Personal response systems allow instructors to present multiple-choice questions in any classroom equipped with a digital projection system. Students are required to purchase a remote (commonly called a 'clicker') that allows them to 'click in' responses, which are recorded by a receiver. With the instructor's remote, a few button clicks allow instant projection of class responses to provide immediate feedback to students and also upload students' responses to a grade book. Clickers have been used for a variety of educational purposes including teaching case studies (Brickman, 2006; Herried, 2006), replicating published studies in class (Cleary, 2008), and electronic testing (Epstein et al., 2002). On the basis of published reports, however, the most common use appears to be during lectures for assessing students' comprehension of class material in real time and improving participation and attendance (Beekes, 2006; Poirier & Feldman, 2007; Shih, Rogers, Hart, Phillis, & Lavoie, 2008).

Although studies of clicker effectiveness have yielded mixed results (discussed later), the bulk of evidence indicates that the technology is effective for enhancing learning. Most prior studies, however, have only compared clicker classrooms with control classrooms not using clickers. Further, to our knowledge, no published work to date has directly explored the cognition underlying clicker effects. Here, we take a different approach and examine the effect of clicker questions on memory for specific bits of factual knowledge in clicker-assisted classrooms. The present experiment was designed to answer two questions. Specifically, does clicker use promote learning in the classroom? If so, do the observed improvements reflect true cognitive change or are the enhancements simply a reflection of greater emphasis placed on clicker-targeted information? To explain the motivation behind this work, the following section will briefly review the literature on clicker-assisted learning and methodological concerns that may limit any conclusions that can be made. A discussion of cognitive mechanisms that may explain clicker effects will provide a foundation for the specific research questions addressed by the study.

CLICKER-ASSISTED LEARNING OUTCOMES

Many studies employing indirect measures of learning have reported positive effects of clickers, such as class participation (Draper & Brown, 2004; Stowell & Nelson, 2007; Trees & Jackson, 2007) and perceptions of learning (Hatch, Jensen, & Moore, 2005), across various disciplines. Yet others report no effect of clickers on indirect measures such as attendance, engagement, or attentiveness (e.g. Morling, McAuliffe, Cohen, & DiLorenzo, 2008). Such varied results exemplify the array of findings within the clicker literature.

More relevant to the present study, investigations employing direct learning measures have also yielded somewhat mixed results. Stowell and Nelson (2007) gave laboratory subjects a simulated introductory psychology lecture and compared test performance between groups asked to either use clickers or do other sorts of participative activities during the lecture. They found no differences between groups on learning outcome measures. Kennedy and Cutts (2005), however, observed some clicker effects but found that the strength of the relationship between clicker use and learning outcome measures hinged on how successful students were in answering the clicker questions.

Despite such discouraging reports, the majority of published studies exploring direct effects of clickers on learning have yielded positive results. Ribbens (2007) found that introductory biology students performed 8% better on tests than his class 2 years prior, before adopting clickers, and Morling et al. (2008) reported higher mean test scores on two of four tests in their clicker classes than in their no-clicker classes.

Among studies reporting positive learning effects of clickers, however, there is no consensus on the nature of the effect. One area where findings differ is in studies that have examined the effect of clicker questions on the specific exam questions they target. In one such study, Shapiro (2009) integrated clicker question performance as a graded part of an introductory psychology course. She targeted specific exam questions with in-class clicker questions and compared performance on targeted test questions with that of a control class that did not use clickers. The lectures, course content, and test questions were identical between classes. Performance on clicker-targeted test questions was 20% higher in the experimental class.

*Correspondence to: Amy M. Shapiro, Psychology Department, University of Massachusetts Dartmouth, 285 Old Westport Road, Dartmouth, MA 02747-2300, USA. E-mail: [email protected]
Copyright © 2012 John Wiley & Sons, Ltd.
Applied Cognitive Psychology, Appl. Cognit. Psychol. 26: 635–643 (2012)
Published online 19 June 2012 in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/acp.2843

Of course, class differences could account for some of the effect. Evidence against group differences was provided by a set of questions targeted as controls, for which neither class saw clicker questions. Performance on those items differed by just under 3% between classes. Although error due to differences between items chosen for control and test conditions may have been a factor, the data do point to an effect of clickers on targeted exam question performance, but not on untargeted question performance.

Using a similar methodology, Mayer et al. (2009) also evaluated clicker-assisted learning. In addition to clicker and control classes, they used a third no-clicker class, which was given questions on paper to answer at the end of each class rather than using clickers. Like Shapiro (2009), they targeted specific exam questions with in-class questions in both clicker and no-clicker classes. In the primary analysis, they evaluated overall exam performance, including exam questions that were directly targeted by clicker questions (similar items) and those that were not (dissimilar items). When the total exam score was used as the dependent measure, students using clickers performed better compared with those not using clickers. Mayer et al. also conducted a secondary analysis, however, comparing student performance on 'similar' versus 'dissimilar' exam questions (see their Table 2). They reported no significant performance differences on 'similar' test questions between the clicker and no-clicker classes and the control classes. The clicker class, however, performed significantly better on 'dissimilar' items than the other two classes. It appears, then, that the overall effect of clickers found in the primary analysis stemmed from the dissimilar items. Although Mayer et al. and Shapiro both found that clicker use improved test performance, their findings do not cohere. Mayer et al. found a positive effect of clicker questions on untargeted (dissimilar) test items but not targeted (similar) items, whereas Shapiro found performance enhancement only on targeted (similar) questions.

The literature on clicker-assisted learning is very inconsistent in its findings. What may be at the root of differential results among so many studies? A number of important methodological issues within studies reporting clicker effects may be the answer. One source may be class or instructor differences within studies that have compared clicker-adopting classes with non-adopting classes, as these studies are vulnerable to the error introduced by individual and group differences. Item differences between clicker-targeted and control test items also may be problematic, as lack of counterbalancing between conditions also introduces error. Moreover, lack of standardization or control regarding the strength of the relationships between clicker questions and test questions creates a potential for variability in strength of the treatment between items in a study, thus threatening internal validity. Finally, student motivation varies between studies, as students may be enticed to participate through varied means such as extra credit, graded tests, or laboratory credit.

Without access to multiple sites or classes to create a cluster randomized design (Raudenbush, 1997), it is difficult to conduct a true experiment in a natural classroom. However, the present investigation combined a within-subjects and within-items design that controls variability from both factors while retaining ecological validity. We know of no other study of clicker-assisted learning to use such a design in a study of content learning (but see Stowell, Oldham, & Bennett, 2010, and Roediger, Agarwal, McDaniel, & McDermott, 2011). In addition to providing greater control, the within-subjects design offers a strong test of clicker effects on targeted material, as clickers will be present in the classroom throughout the experiment. In this way, the experiment is not a simple 'clicker classroom versus no-clicker classroom' study. Instead, all subjects were exposed to clickers, and the dependent variable measured the effect of clicker questions on the acquisition of specific concepts targeted by clicker questions. If clicker effects are still detected under these conditions, the study will have found strong evidence for clicker-enhanced learning of targeted content.

COGNITION AND CLICKERS

As elucidated in the previous section, many researchers have made claims about the positive effect of clicker technology on learning and memory. It is possible, however, that clicker questions merely highlight important ideas for students. In other words, the effect may come about by prompting students to direct attention resources to specific items during class and in subsequent study. Attention is a necessary first step in creating a memory, so anything that increases attention holds the possibility of enhancing memory. A savvy student should be able to glean from in-class questions the information deemed important by the instructor. It would make sense to direct study efforts toward those topics. If this sort of attention grabbing is at the root of the learning enhancements observed in some clicker studies, the effects are not particularly interesting from a cognitive or theoretical point of view. It would also bring into question whether the effort required to generate clicker questions, not to mention the expense of the hardware to students, is worthwhile. After all, it might be just as effective to give students lists of important topics to attend to in class and during study.

A second and more theoretically interesting possibility, one that would support the use of clickers as a means of affecting cognitive change, is that clicker-induced retrieval acts as a source of memory encoding. In what is known as the testing effect, Karpicke, Roediger, and others have documented that the act of retrieving information from memory can strengthen memory and improve later recall or recognition (Butler, Karpicke, & Roediger, 2007; Carrier & Pashler, 1992; Glover, 1989; Karpicke & Roediger, 2007a, 2007b, 2008; Roediger & Karpicke, 2006a). The testing effect has been demonstrated using free-recall (Jacoby, 1978; Szpunar, McDermott, & Roediger, 2008), short-answer (Agarwal, Karpicke, Kang, Roediger, & McDermott, 2008), and multiple-choice (Duchastel, 1981; Nungester & Duchastel, 1982) tests. Moreover, it has been shown for various types of materials such as word lists (Karpicke & Roediger, 2006a; Tulving, 1967), paired associates (Allen, Mahler, & Estes, 1969), and text (Nungester & Duchastel, 1982; Roediger & Karpicke, 2006a).

Why might testing through clickers (or other means) strengthen memory? One mechanism through which clickers might work is to create encoding conditions that mirror those at retrieval, a benefit traditional lecture-based learning does not offer (Blaxton, 1989; Morris, Bransford, & Franks, 1977). In other words, testing may create conditions under which transfer-appropriate processing may occur. That is, answering multiple-choice questions during class or study may enhance performance on multiple-choice exam questions. In fact, Nungester and Duchastel (1982) provided support for the transfer-appropriate processing explanation, as they found that short-answer and multiple-choice tests improved later performance on final multiple-choice and short-answer tests, but only when the formats were matched.
Although transfer-appropriate processing is one reasonable hypothesis about the testing effect, there is evidence to support another. Specifically, there is evidence that the process of retrieval itself strengthens or otherwise alters the memory trace, a possibility proposed by Bjork (1975). With that idea in mind, Kang, McDermott, and Roediger (2007) theorized that, if retrieval is a factor, a more demanding retrieval task should produce stronger testing effects than a simpler task. They found that, as long as feedback was offered during initial tests, short-answer tests improved performance on a final test better than the multiple-choice or control conditions. A similar effect was reported by McDaniel, Anderson, Derbish, and Morrisette (2007), who also found that a more demanding, short-answer test showed the greatest learning improvement.

Results of Kang et al. (2007) support Bjork's (1975) notion that the act of retrieval may strengthen the memory trace. However, they also point to the importance of feedback in the testing effect, a topic that has been much studied in the literature. Overall, empirical research has demonstrated that feedback has a generally positive effect on learning outcomes (e.g. Butler, Karpicke, & Roediger, 2007; Pashler, Cepeda, Wixted, & Rohrer, 2005; Sassenrath & Gaverick, 1965). Feedback as an explanatory mechanism for the testing effect is very relevant to the exploration of clicker effects because many instructors offer immediate feedback to clicker responses by projecting graphs of class polling results. It is important to note that the testing effect has been demonstrated in many experiments not employing feedback (Kang et al., 2007, Experiment 1; Marsh, Agarwal, & Roediger, 2009; Roediger & Karpicke, 2006a, 2006b), so regardless of the factors contributing to feedback effects, some other mechanism unique to testing appears to be working either alongside or integrated with feedback.
  • 16. GOALS OF THE PRESENT STUDY The published literature on clicker learning effects is troubled by methodological issues that impede clear understanding of the technology’s effect on learning and memory. Thus, the first goal of the present study was to employ a methodology that controlled subject and item differences. Toward this end, a series of clicker questions were written for targeted exam questions that were offered in two college-level clicker-based classrooms, which provided ecologically valid conditions under which to examine clicker effects. Half the questions served as control items in one class and as clicker-targeted items in the other. In this way, all subjects and all items served in both conditions, thus eliminating any error introduced by possible item and subject differences. It is important to note that the present study was not designed as a general investiga- tion of ‘clickers versus no clickers’ in the classroom. Rather, it was aimed at examining the effect of clicker questions on acquisition of the specific information they target within a clicker classroom. If clicker effects stem from a general effect of questioning in class, there should be no difference between clicker and control conditions in the present study. In this way, the present study is a strong test of clicker effects, as the within-subjects design biases the results against the study’s main hypothesis if clicker effects are general rather than specific to the targeted information. In addition to exploring the learning effects of clickers, a second aim of the experiment was to rule out the possibility that clickers work by alerting students to the content of future exam questions. Thus, we also compared performance on test items targeted with clicker questions with perfor- mance on the same items when students were told the information would be on the test. If clickers work by
  • 17. invoking the testing effect rather than alerting students to important information, performance on clicker-targeted exam questions should be equal to or better than performance on the same items when attention ‘flags’ are given. If the attention-grabbing hypothesis can be ruled out, the testing effect will be the most reasonable explanation for clicker effects. If cognitive change due to the testing effect can be identified as the source of clicker effects on test performance, it would mean that clicker technology offers a true learning advantage rather than mere study prompts. Such a result would be important to understanding the cognition underlying clicker use and pedagogical practice. METHOD The experiment was designed to test two distinct hypotheses. The first was that in-class clicker questions would have a positive effect on students’ ability to remember factual information and answer delayed exam questions on the same topic. If Hypothesis 1 is correct, items targeted by clicker questions will be answered correctly more often than when they are not targeted by clicker questions. This result will also serve as an important validity check of our methodology. Because we used a within-subjects design, there is the possibil- ity that the presence of clickers in the classroom will boost performance on non-targeted items. A significant difference between the clicker and simple control conditions will demon- strate that the presence of the clickers did not contaminate the control condition. The second hypothesis was that clicker-mediated perfor- mance improvement is due to directing students’ attention to the relevant material, thus flagging certain information as important and likely to be on the exams. Hypothesis 2 leads to the prediction that targeting exam questions with alerts will not increase exam performance more than targeting the same
items with clicker questions. If subjects are merely being alerted to important topics by clicker questions, explicit alerts should yield greater performance than the implied alerts offered by clicker questions.

Clicker use enhances memory 637. Copyright © 2012 John Wiley & Sons, Ltd. Appl. Cognit. Psychol. 26: 635–643 (2012)

In addition to testing these hypotheses with direct measures of learning outcomes, a survey was given to probe students’ awareness of what helped them remember class information and direct their study efforts. Because the cognitive processes that underlie memory formation and trace strengthening are generally outside a learner’s awareness, students should not be consciously aware of the role of clicker use in their test performance if the testing effect is at work.

Subjects

Participants were undergraduates at a state university in the eastern United States, enrolled in one of two introductory psychology classes taught by one of the experimenters. In the first class, 131 students were enrolled, 47% of whom were men; 72%, 26%, 1%, and 1% were spread across the freshman, sophomore, junior, and senior classes, respectively. In the second class, 49% of the 200 students enrolled were men, with 61%, 5%, 32%, and 2% of the students spread across each respective grade level. In both classes, students generally ranged in age from 18–25 years. Students participated in the study as part of their normal coursework, earning points equal to roughly 14% of the final grade by correctly answering in-class questions. Institutional review
board approval was sought prior to beginning the study, and a waiver was granted.

Materials

The class covered 11 topics in general psychology, with a chapter assigned for each in Discovering Psychology (Hockenbury & Hockenbury, 2007). The class met 3 days a week for 50 minutes over 15 weeks and was taught as a typical lecture course with some videos, interactive activities, and participation integrated into many of the lectures. All lectures were accompanied by a PowerPoint presentation that projected main points and illustrations onto a large screen. The slides were projected with an Apple MacBook Pro computer and a digital projection system. In-class clicker questions were integrated into the PowerPoint presentations, with individual slides dedicated to single questions. The iClicker system was used to allow students to make their responses to clicker questions. Students were required to purchase their clickers along with their textbooks. The iClicker Company supplies the receiver and software at no cost to adopting instructors.

The exams in this class were not cumulative, each covering only the assigned material since the previous test. Four exam items from each course topic (44 exam items), spread across four different tests during the semester, were chosen as targets for the experiment. Performance on these items was the dependent variable. A multiple-choice clicker question was written for each exam question, all of which were also multiple choice. All clicker and exam questions used for the study were factual, asking only about basic, declarative information presented in class. Appendix A provides two sample clicker–exam question pairs. All targeted exam questions were included on the exams for each of the classes participating in the study.
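As noted at the opening of this section, half of the targeted questions served as control items in one class and as clicker-targeted items in the other, so every item and every subject contributed to both conditions. A minimal sketch of that counterbalanced assignment; the item labels and class names are illustrative placeholders, not the study’s actual materials:

```python
# Sketch of the counterbalanced within-subjects/within-items assignment:
# each targeted exam item is clicker-targeted in one class and a control
# item in the other. Item IDs and class labels are hypothetical.

def counterbalance(item_ids):
    """Split items so each class sees half as clicker-targeted, half as control."""
    half = len(item_ids) // 2
    first, second = item_ids[:half], item_ids[half:]
    return {
        "class_A": {"clicker": first, "control": second},
        "class_B": {"clicker": second, "control": first},
    }

items = [f"item_{i:02d}" for i in range(1, 21)]  # one 20-item stimulus set
design = counterbalance(items)

# Every item appears in both conditions across the two classes.
assert set(design["class_A"]["clicker"]) == set(design["class_B"]["control"])
assert set(design["class_A"]["control"]) == set(design["class_B"]["clicker"])
```

Because each item serves in both conditions (in different classes), item difficulty cancels out of the clicker-versus-control comparison, which is the point of the design.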
Two independent content experts provided validation ratings of the stimuli. Both were professors of psychology who routinely taught introductory psychology. They were presented with each clicker and exam question and asked to rate them on a 7-point scale for the following dimensions: (i) overall quality of the question; (ii) relevance of the information targeted by the clicker–exam item pairs to the content and goals of an introductory psychology course; and (iii) the relationship between each clicker item and each exam question. For each index, higher ratings indicated better-quality questions, greater relevance to the course aims, and a greater relationship between clicker and exam items, respectively. A cutoff mean of 4.5 was set for the quality and relevance scores. Any question or clicker–exam question pair that did not achieve a mean rating of 4.5 on all these dimensions was not used in the study. The mean overall quality rating for the clicker and exam questions used in the study was 6.11 and 6.09, respectively. The range of mean scores was 5.0–6.5 for the clicker questions and 5.5–7.0 for the exam questions. The mean relevance of the material to the course was 6.36, with a range of 4.5–7.0.

To establish the strength of the relationship between clicker and exam question pairs, the raters were asked to indicate the extent to which correctly answering each clicker question required retrieval of the same information from memory as each exam question. This was performed for two reasons. The first was to validate each clicker question as a reasonable test of the same knowledge as its intended exam target. Thus, a high rating established that the clicker questions were directly accessing the memory relevant to their respective exam questions. The second reason was to
ensure that there were no ‘spillover effects’ of clicker items to exam items for which they were not intended. Toward that end, each clicker question was also evaluated against each of the other exam items used as the study’s dependent measure. Low ratings between each of the clicker questions and the experimental exam questions for which they were not intended indicate low likelihood that clicker questions would affect performance on test questions other than those for which they were intended. Establishing control of the independent variable in this way is important, as such spillover effects could contaminate other items or conditions.

If the information required to answer each question in a clicker–exam pair was identical, they were asked to rate it with a 7. If the information was unrelated and easily separable, they were asked to rate it with a 1. Ratings from 2–6 indicated commensurate degrees of relatedness. For the clicker–exam item relatedness scores, pairs of items that did not achieve a mean rating of 4.5 were not used in the study. Likewise, any exam question that was rated with a relatedness score higher than 3 with any clicker question for which it was not intended was not included in the study. The mean rating of the intended clicker–exam question pairs was 6.80 (with a mean range of 5–7), and the mean rating between unintended pairs was 1 (with all ratings at 1).

638 A. M. Shapiro and L. T. Gordon. Copyright © 2012 John Wiley & Sons, Ltd. Appl. Cognit. Psychol. 26: 635–643 (2012)

Procedure and experimental design

The basic information needed to correctly answer each of the 40 targeted exam questions was presented during class lectures, with the information printed on a projected slide
at an appropriate time during lecture. Clicker questions were offered during lecture at varying time intervals that were not predictable to students. They were offered after a topic was covered and only after the instructor both solicited and answered any questions from the class. Anywhere from one to five clicker questions were asked on any given day in class. Some of these were not experimental clicker items but were used as ‘filler’ questions to provide sufficient credit for students. Credit for correctly answered clicker questions over the course of the semester amounted to roughly 14% of the final grade.

One set of 20 items was chosen to test Hypothesis 1, regarding the learning effects of clickers. A separate set of 20 items was chosen to address Hypothesis 2, regarding the cognition underlying clicker effects. Regardless of which hypothesis was being tested, the clicker questions were offered in the same way, as previously described. Within each subset of 20 clicker–exam question pairs created for the experiment, 10 were assigned to the clicker condition in one class and to the control condition in the other class. The opposite assignment was made for the other 10 items. Thus, each subject and item contributed equally to both conditions.

Whereas the procedure for presenting clicker questions was identical across item sets used to test each hypothesis, the procedure used to create the control conditions differed. In the case of Hypothesis 1, the information relevant to the control item was simply presented as part of the class lecture, with the information included on a PowerPoint slide. Thus, the conditions were merely set up to compare learning when clickers are used or not. For Hypothesis 1, then, the conditions will be referred to as the clicker1 and simple control
conditions. For the second hypothesis, the experimental and no-clicker control conditions will be referred to as the clicker2 and attention-grabbing conditions, respectively. When the information necessary to answer an exam question targeted as an attention-grabbing item was presented in class, it was highlighted on the projected slide. The instructor’s remote was used to turn the font red and pulse the text. In addition, the instructor announced, ‘This information is very important. It is likely to be on your test.’ These attention-grabbing ‘flags’ were offered either just before or during the presentation of the relevant information.

Students were allowed 40–90 seconds to answer each question, depending upon how long the question was. When a question was projected, a timer also appeared on the screen, thus making students aware of the time limit. After students had submitted their responses, a bar chart showing the percentage of the class to respond with each option was projected onto the screen, and the instructor highlighted the correct answer in red by clicking on the bar. In this way, students received feedback about their responses to each question. If less than 90% of the class correctly answered an item, the instructor explained the correct answer, whether students posed questions or not. On all but a few of the clicker items used in the study, however, students scored 90% or higher and asked no questions after seeing the correct answer.

An in-class survey was also given to students 1 week before the end of the semester. The survey was designed to solicit students’ conscious impressions of factors affecting their memory and study strategies. The survey was administered by projecting the questions onto the screen during class. Each question was projected individually, and the instructor read each aloud. Students were asked to indicate
a response to each question using a 5-point Likert scale with their clickers. Students were given 15 seconds to respond to each question. Specifically, students were asked how much the in-class questions, the highlighted information on the PowerPoint slides, and instructor emphasis affected their choices about what to study. They were also asked to rate how much each of those factors enhanced their learning and memory of class material. None of the class results were projected to the class or reported to them before the last test.

RESULTS AND DISCUSSION

Students who attended fewer than 60% of the classes over the semester were excluded from the analysis, as their exposure to the independent variable was considered too low to reflect accurately the effect of the intervention. Likewise, students who missed more than one exam were also excluded, as these students were missing at least half the data. A total of 226 subjects were included in the analysis.

As a check of the equivalence of the independent and dependent variables used to test the two hypotheses, a paired t-test was conducted to compare subjects’ performance on the 20 exam questions used to test Hypothesis 1 when assigned to the clicker1 condition with the second set of 20 used to test Hypothesis 2 when assigned to the clicker2 condition. Students scored a mean of 69.8 (SD = 17.9) on the clicker1 items and 72.1 (SD = 17.8) on the clicker2 items. The difference was non-significant in a paired t-test, t(225) = 1.62, p > .05. An unpaired t-test was conducted to compare means when calculated by items. Those in the clicker1 condition were correctly answered by a mean of 68.6% (SD = 17.8) of students, and those in the clicker2 condition by 72.5% (SD = 17.8) of students. The difference was non-significant, t(38) = 0.74, p > .05.
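The by-subjects analysis above treats each student’s two condition scores as a pair and tests whether the mean per-subject difference departs from zero. A minimal, stdlib-only sketch of that paired t statistic, t = mean(d) / (sd(d) / √n); the score arrays are made-up placeholders, not the study’s data:

```python
# Paired (by-subjects) t-test sketch: t = mean(d) / (sd(d) / sqrt(n)),
# where d is the per-subject difference between the two conditions.
# Scores below are hypothetical placeholders, not the study's data.
from math import sqrt
from statistics import mean, stdev

cond1 = [70, 65, 80, 72, 68, 75, 62, 78]  # e.g., scores on one item set
cond2 = [72, 66, 79, 75, 70, 74, 65, 80]  # e.g., scores on the other set

def paired_t(a, b):
    """Return the paired t statistic and its degrees of freedom (n - 1)."""
    diffs = [y - x for x, y in zip(a, b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

t, df = paired_t(cond1, cond2)
print(f"t({df}) = {t:.2f}")
```

The by-items analysis works the same way in spirit, but because the two 20-item sets contain different questions, the item means form independent groups and an unpaired test is used instead, which is why its degrees of freedom are 38 (20 + 20 − 2) rather than n − 1.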
As proponents of item response theory have shown, learner characteristics cannot be separated from test characteristics, as they interact to determine exam performance (Baker, 2001; Van der Linden & Hambleton, 1997). To better account for variability among subjects and test items, comparisons between control and experimental groups were conducted by both subjects and items. According to item response theory, an individual’s latent traits such as intelligence and motivation and item factors such as difficulty and discrimination contribute to overall test results. Averaging across both subjects and items offers some degree of assurance that the outcome analysis is not unduly influenced by one set of characteristics. That is, agreement between subject and item analyses offers stronger confirmation of an effect than one analysis alone.

Hypothesis 1: Clicker questions improve learning

Students answered a mean of 61.4% (SD = 17.8) of the targeted exam questions correctly when they were not given in-class clicker questions on the relevant content, as compared with 69.8% (SD = 17.9) when the same items were included in the clicker condition. The 8.4-point difference between means represents a 13.7% improvement on exam questions from control to clicker conditions, and the difference between a traditional letter grade of D− versus C−. The difference was significant when analyzed by subjects, t(225) = 5.78, p < .001, d = 0.38. The difference was also
significant when analyzed by items, t(19) = 3.46, p < .01, d = 0.77. For exam items in the no-clicker control condition, a mean of 62.2% of students answered correctly. When the same items were used in the clicker condition, a mean of 68.6% of students answered correctly. The 6.4-point difference represents a 10.3% performance increase on exam items when in-class clicker questions were asked about relevant content.

These results strongly support the conclusion that asking students factual, multiple-choice questions enhances memory for the relevant information on delayed, factual test questions. They suggest that the technology may be taking advantage of the testing effect in the classroom. The magnitude of the observed effect is not unprecedented, as prior studies have shown that a single testing episode in advance of a final test can enhance learning by even greater amounts (see Roediger & Karpicke, 2006b for a review). As any reasonable critic would rightly point out, however, it may be the case that clicker questions do not strengthen memory traces or connections leading to them. Rather than effecting true cognitive change, the questions may merely cue students that the instructor deems certain pieces of information to be of particular importance. If so, it would certainly be reasonable for students to focus more on that information during study, thus augmenting performance on test items targeting that information. Analysis of the second stimulus set and the survey results addresses that issue.

Hypothesis 2: Clicker questions improve learning by alerting students to important material

The attention-grabbing hypothesis was not supported by the comparison of the clicker2 and attention-grabbing conditions. When information was highlighted on class slides and
students were told it was important and would be included on the test (the attention-grabbing condition), students correctly answered an average of 70.1% (SD = 17.8) of the targeted exam questions. When they were not told the material was of particular importance but were given clicker questions about the material (the clicker2 condition), they correctly answered 72.1% (SD = 17.2). The difference was not statistically significant, t(225) = 1.33, p > .05. Analyzed by items, an average of 68.7% of students correctly answered targeted exam questions when they were in the attention-grabbing condition and 72.5% correctly answered the same items when assigned to the clicker2 condition. The difference just reached significance and had a medium effect size, t(19) = 2.06, p = .05, d = 0.46. In short, offering a clicker question improved performance on delayed exam questions as well as or better than explicitly telling students that the information would be on the test.

Class survey

Unpaired t-tests comparing class responses to the survey questions indicated no significant differences between classes with respect to how they answered any of the survey questions. As such, all of the data for both classes were combined for the analysis. To elicit students’ candid responses, the surveys were anonymous. As such, it was not possible to identify the students who attended fewer than 60% of classes or missed more than one test. Thus, the survey results represent the entire class, rather than the subset of students used for the study.

A repeated-measures analysis of variance with a Greenhouse–Geisser correction comparing students’ responses to the questions probing how much the clicker questions, professor emphasis, and slide emphasis helped them to learn the material was significant, F(1.77, 476.38) = 68.409, p < .001, η²partial = .20. Students reported that answering the
clicker questions was slightly less than moderately helpful in learning the material, as the average rating was 2.84 on a 1–5 scale. The means were 3.68 and 3.39 for professor and slide emphasis, respectively. Pairwise comparisons using the Bonferroni correction indicated that students felt that the clicker questions had significantly less impact on learning class material than both the slide emphasis (p < .01) and the instructor’s verbal remarks (p < .01). The difference between slide and instructor emphasis was also significant, p < .01.

Survey questions also probed students for information about what guided their decisions about what to study. If clicker questions were effective because they drew students’ attention to material to be tested, one would expect that students would have used that information to direct their study efforts. Student responses, however, do not indicate that clicker questions were highly influential, as they rated their impact on study choices with a moderate mean of 3.04. Students rated the professor’s verbal remarks and highlighted information on the slides much higher (4.28 and 3.86, respectively) than the clicker questions. A repeated-measures analysis of variance with a Greenhouse–Geisser correction indicated that the differences were significant, F(1.76, 481.34) = 184.012, p < .001, η²partial = .40. Again, pairwise comparisons using the Bonferroni correction indicated significant differences between ratings for the clicker and slide emphasis, clicker and instructor emphasis, and slide and instructor emphasis, all at p < .01.

In sum, the results of the Hypothesis 1 analysis demonstrate that clicker technology is an effective classroom learning tool. Performance on delayed, targeted exam questions increased significantly when the information was tested in class shortly after learning the material. The test of
Hypothesis 2 demonstrated that clicker questions were equally or even more effective than cues about the content of future exams. Although the magnitude of the clicker effect is not great enough to rule out attention grabbing as a factor in clicker effects, attention grabbing does not fully account for clicker effects. The clicker results support the conclusion that the testing effect seems to be working in tandem with attention grabbing to produce the clicker effects established in the test of Hypothesis 1. Although clicker questions may serve to guide some students about what to study, it is clear that more is at work than increased study of clicker-targeted materials. After all, students rated the clicker questions as less influential in guiding their study efforts than actually telling them what would be on the exam. Moreover, one would not expect students to be consciously aware of in-class questions augmenting memory because the testing effect stems from unconscious cognitive processes (we do not have conscious access to the cognitive processes underlying memory construction or consolidation). That prediction is borne out by the relatively low ratings of clicker questions as memory enhancers.

GENERAL DISCUSSION AND CONCLUSIONS

One purpose of the present investigation was to document the positive effects of classroom clicker use on learning by employing a methodology that addresses the shortcomings of some prior studies. Specifically, by using a within-items
and within-subjects design, while still conducting the study in live classrooms, the experiment was designed to tighten experimental control while maximizing ecological validity. Because the design was within subjects and students were using clickers in the same lectures in which they were exposed to the control condition content, the significant performance difference between clicker and control items indicates a strong effect of clicker questions on targeted information acquisition. The second goal was to provide evidence about the cognition underlying clicker-assisted learning effects.

The experiment demonstrated that clickers are effective pedagogical tools. Performance on delayed exam questions increased significantly when the information was targeted by in-class clicker questions. It also revealed that clicker questions were equally or more effective than cuing students about the information being on a future exam. The results support a role of the testing effect in clicker-assisted learning; however, the equivalent performance of the clicker and attention-grabbing groups in the subject analysis of Hypothesis 2 does not completely rule out the role of attention grabbing in clicker effects. It is likely that the testing effect is working in tandem with attention grabbing and perhaps some increased study of clicker-targeted information. The data trend seen in the means of that analysis, however, is in a direction opposite to what the attention-grabbing hypothesis predicts. Moreover, the analysis by items indicated a significant advantage of clicker questions over alerts, with a moderate effect size, although the survey results indicated students actually studied the information in the alert condition more than the clicker condition. The latter point is remarkable because it reveals that students performed better on the very questions they reported attending to less during study (i.e. the clicker-targeted items).
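The percentage gains and effect sizes reported in the Results above can be recovered from the summary statistics alone: the relative improvement is the difference between condition means divided by the control mean, and for these within-design t-tests Cohen’s d is approximately t / √n, with n the number of subjects (226) or items (20). A quick arithmetic cross-check against the reported values:

```python
# Cross-checking the reported statistics from the paper's summary numbers.
from math import sqrt

# Hypothesis 1, by subjects: 61.4% (control) vs 69.8% (clicker).
improvement_subjects = (69.8 - 61.4) / 61.4 * 100
print(round(improvement_subjects, 1))  # 13.7, as reported

# Hypothesis 1, by items: 62.2% (control) vs 68.6% (clicker).
improvement_items = (68.6 - 62.2) / 62.2 * 100
print(round(improvement_items, 1))  # 10.3, as reported

# Effect sizes via d ≈ t / sqrt(n): n = 226 subjects or 20 items.
print(round(5.78 / sqrt(226), 2))  # 0.38 (Hypothesis 1, by subjects)
print(round(3.46 / sqrt(20), 2))   # 0.77 (Hypothesis 1, by items)
print(round(2.06 / sqrt(20), 2))   # 0.46 (Hypothesis 2, by items)
```

All five values match the statistics reported above, which is a useful internal-consistency check when reading results of this kind.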
Further, the clicker questions were only offered after information was presented in class, so they
could not have served to increase attention during lecture. The attention alerts, however, were often given before or in the middle of explanations, so attention actually should have been greater in the attention condition. On balance, the weight of evidence cannot rule out a role of attention grabbing in clicker effects, so the ability of clicker questions to ‘flag’ information should be further explored in future studies. To whatever degree attention grabbing is at play in clicker effects, there seems to be something about the actual act of answering clicker questions (apart from attention grabbing) that enhances memory for lecture content.

One possible mechanism through which answering clicker questions may enhance memory for class material is repetition. That is, clicker questions may merely offer multiple exposures to the information. After the information is provided in class, the clicker questions serve as a second exposure, thus enhancing the strength of memory for the material. However, the magnitude of the improvement seen in the clicker1 versus simple control analysis (10–13%) is hard to explain by a single re-exposure to the material during class. Perhaps students studied clicker-targeted material more, thus increasing exposure to the material outside of class. If so, the results might be attributable to repetition effects, after all. The survey results, however, indicated that the alerts were more influential than clicker questions in directing students’ study efforts. Given that students’ self-reports indicate that they spent significantly more time studying the information that was highlighted in class than the information targeted by the clicker questions, one would expect greater repetition and learning in the attention-grabbing condition as opposed to the clicker2 condition. Because performance on items assigned to the clicker2 condition was better than that on items assigned to the attention condition, that possibility is
not supported by the data. Although the present study was not designed to specifically rule out repetition effects, it does offer indirect evidence against repetition as the power behind clicker effects.

Because the present data make repetition effects unlikely as a source of clicker effects, the most likely explanation is that the testing effect is at work. The mechanism underlying the testing effect has been researched at length, with evidence reported in support of feedback (Butler et al., 2007), transfer-appropriate processing (e.g. Nungester & Duchastel, 1982), and trace strengthening (e.g. Kang et al., 2007). The present study was not designed to distinguish between these possibilities. However, it is logical to conclude that the feedback students receive about their performance was useful in either reinforcing or correcting recently learned information. Because students are often poor judges of their own memory and learning (Bjork, 1999; Koriat, 1993; Koriat & Bjork, 2005), they often confuse familiarity with robust memory. That is, students who spend time re-reading the text or ‘going over’ their notes often lack the metacognitive skills needed to separate the subjective feeling of familiarity gained from this type of lax ‘studying’ from true knowledge. However, clicker questions challenge students to retrieve recently learned information, thus providing unambiguous feedback about their understanding, which may be a factor in their effectiveness. Alternatively, transfer-appropriate processing is another viable explanation for the present results, as the clicker and exam questions were all offered in the same format. Of course, whether each of these interpretations is valid is a question to be explored in future studies directed at distinguishing between all these likely mechanisms during clicker use.

One limitation of the study is that the measure of students’ study emphasis was a self-report, which is less reliable than a direct measure. Although future studies may examine that variable using a different methodology, the narrow focus of the present work was to control as much error as possible in the sample and in the stimuli to determine whether clicker questions enhance retention of targeted material. The present design offers a rigorous test of that hypothesis. Also, it would have been ideal to use the same items to test each hypothesis and fully counterbalance them between the clicker, control, and attention-grabbing conditions. It is a limitation of the study that separate items were used to test each hypothesis, thus preventing direct comparisons between their respective items. The decision was made to create separate stimulus sets for each hypothesis because there were only two classes available for the study. As such, it was not possible to fully counterbalance test items between all three conditions (clicker, no clicker, and attention grabbing), and the tight control attained through full counterbalancing was a crucial methodological issue in this experiment. The current design, however, still allowed the important comparisons necessary to address Hypotheses 1 and 2. The only comparison that could not be made while simultaneously controlling item differences was between the simple control and attention-grabbing items. Because that comparison would not inform the aims of the study, it was seen as a reasonable compromise. The differences between the attention and simple control groups were, in fact, rather robust in the subject analysis and in the predicted direction in both analyses (70.1% vs 61.4% in the subject analysis and 68.7% vs 62.2% in the item analysis, respectively), suggesting that the attention-grabbing manipulation was indeed effective at
promoting attention and study of certain facts. The demonstrated equivalence between questions used in the clicker1 and clicker2 conditions supports the validity of the differences between the attention and simple control groups and thus the validity of the attention-grabbing manipulation.

Another limitation was the narrow focus of the investigation necessitated by the within-subjects design, as no clicker-free control condition could be included in the study. Without a comparison group that used no clickers at all, the present results cannot determine whether the benefits of clickers also extended to some degree to the untargeted test questions. It is certainly possible that untargeted question performance was also boosted by clicker use, but just to a lesser extent than the targeted questions. Indeed, some studies have shown an effect of clicker use on untargeted material (e.g. Mayer et al., 2009). Finally, the present study examined only one aspect of learning, fact retention. It did not examine the effect of clicker questions on the development of conceptual understanding, problem solving, critical thinking, or other aspects of learning. It will be important for future studies to weigh the benefits of clickers in those areas.

From the point of view of practice, the data offer encouraging news to educators, particularly those teaching large groups of students. The data suggest that although some attention grabbing may contribute to the observed benefits of clickers, the questions are also effecting real cognitive change in the classroom, thus offering a real learning advantage to students. With teacher investment of just a few minutes to incorporate a clicker question into a presentation and a minute or so of class time to present, class performance on delayed exam items can be significantly and meaningfully increased. In the present study, the clicker questions were associated with a performance increase of roughly 10–13%, which seems to be a good return on investment. The technology has its limits, as only so many questions can reasonably be asked in a single class meeting, but the evidence strongly suggests that clickers are a profitable investment for teachers and students.

REFERENCES

Agarwal, P. K., Karpicke, J. D., Kang, S. K., Roediger, H. L., & McDermott, K. B. (2008). Examining the testing effect with open- and closed-book tests. Applied Cognitive Psychology, 22, 861–876. DOI: 10.1002/acp.1391

Allen, G. A., Mahler, W. A., & Estes, W. K. (1969). Effects of recall tests on long-term retention of paired associates. Journal of Verbal Learning & Verbal Behavior, 8(4), 463–470. DOI: 10.1016/S0022-5371(69)80090-3

Baker, F. (2001). The basics of item response theory. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation, University of Maryland.

Beekes, W. (2006). The “millionaire” method for encouraging participation. Active Learning in Higher Education: The Journal of the Institute for Learning and Teaching, 7, 25–36.

Bjork, R. A. (1975). Retrieval as a memory modifier: An interpretation of negative recency and related phenomena. In R. L. Solso (Ed.), Information
  • 36. processing and cognition: The Loyola symposium (pp. 123– 144). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc. Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In D. Gopher, & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press. Blaxton, T. A. (1989). Investigating dissociations among memory measures: Support for a transfer appropriate processing framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 3–9. DOI: 10.1037/0278-7393.15.4.657 Brickman, P. (2006). The case of the druid Dracula: A directed “clicker” case study on DNA fingerprinting. Journal of College Science Teaching, 36(2), 48–53. Butler, A. E., Karpicke, J. D., & Roediger, H. L. (2007). The effect of type and timing of feedback on learning from multiple-choice tests. Journal of Experimental Psychology. Applied, 13, 273–281. DOI: 10.1037/1076- 898X.13.4.273 Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention.
  • 37. Memory & Cognition, 20(6), 633–642. Cleary, A. (2008). Using wireless response systems to replicate behavioral research findings in the classroom. Teaching of Psychology, 35, 42–44. DOI: 10.1080/00986280701826642 Draper, S., & Brown, M. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81–94. DOI: 10.1111/j.1365-2729.2004.00074.x Duchastel, P. C. (1981). Retention of prose following testing with different types of tests. Contemporary Educational Psychology, 6, 217– 226. DOI: 10.1016/0361-476X(81)90002-3 Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K. A., Hendel, R. A., Epstein, B. B., & Brosvic, G. M. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychological Record, 52(2), 187–201. Glover, J. A. (1989). The “testing” phenomenon: Not gone but nearly forgotten. Journal of Educational Psychology, 81, 392–299. DOI: 10.1037/0022-0663.81.3.392 Hatch, J., Jensen, M., & Moore, R. (2005). Manna from heaven or “clickers”
  • 38. from hell. Journal of College Science Teaching, 34(7), 36–39. Herried, C. (2006). “Clicker” cases: Introducing case study teaching into large classrooms. Journal of College Science Teaching, 36(2), 43–47. Hockenbury, D. H., & Hockenbury, S. E. (2007). Discovering psychology (4th ed). New York, NY: Worth Publishers, Inc. 642 A. M. Shapiro and L. T. Gordon Copyright © 2012 John Wiley & Sons, Ltd. Appl. Cognit. Psychol. 26: 635–643 (2012) Jacoby, L. L. (1978). On interpreting the effects of repetitions: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649–667. DOI: 10.1016/S0022- 5371(78)90393-6 Kang, S. H. K., McDermott, K. B., & Roediger, H. L. (2007). Test format and corrective feedback modulate the effect of testing on memory retention. European Journal of Cognitive Psychology, 19, 528–558. Karpicke, J. D., & Roediger, H. L. (2007a). Repeated retrieval during learning is the key to long-term retention. Journal of Memory and Language, 57, 151–162. DOI: 10.1016/j.jml.2006.09.004
Karpicke, J. D., & Roediger, H. L. (2007b). Expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 704–719. DOI: 10.1037/0278-7393.33.4.704
Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319, 966–968. DOI: 10.1126/science.1152408
Kennedy, G. E., & Cutts, Q. I. (2005). The association between students' use of an electronic voting system and their learning outcomes. Journal of Computer Assisted Learning, 21, 260–268. DOI: 10.1111/j.1365-2729.2005.00133.x
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100, 609–639. DOI: 10.1037/0033-295X.100.4.609
Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one's knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 187–194. DOI: 10.1037/0278-7393.31.2.187
Marsh, E. J., Agarwal, P. K., & Roediger, H. L. (2009). Memorial consequences of answering SAT II questions. Journal of Experimental Psychology: Applied, 15, 1–11. DOI: 10.1037/a0014721
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51–57. DOI: 10.1016/j.cedpsych.2008.04.002
McDaniel, M., Anderson, J., Derbish, M., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19, 494–513. DOI: 10.1080/09541440701326154
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. (2008). Efficacy of personal response systems ("clickers") in large, introductory psychology classes. Teaching of Psychology, 35, 45–50. DOI: 10.1080/00986280701818516
Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533. DOI: 10.1016/S0022-5371(77)80016-9
Nungester, R. J., & Duchastel, P. C. (1982). Testing versus review: Effects on retention. Journal of Educational Psychology, 74, 18–22. DOI: 10.1037/0022-0663.74.1.18
Pashler, H., Cepeda, N. J., Wixted, J. T., & Rohrer, D. (2005). When does feedback facilitate learning of words? Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(1), 3–8.
Poirier, C. R., & Feldman, R. S. (2007). Promoting active learning using individual response technology in large introductory psychology classes. Teaching of Psychology, 34(3), 194–196.
Raudenbush, S. W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2, 173–185. DOI: 10.1037/1082-989X.2.2.173
Ribbens, E. (2007). Why I like clicker personal response systems. Journal of College Science Teaching, 37(2), 60–62.
Roediger, H. L., & Karpicke, J. D. (2006a). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17, 249–255. DOI: 10.1111/j.1467-9280.2006.01693.x
Roediger, H. L., & Karpicke, J. D. (2006b). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210. DOI: 10.1111/j.1745-6916.2006.00012.x
Roediger, H., Agarwal, P., McDaniel, M., & McDermott, K. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382–395. DOI: 10.1037/a0026252
Sassenrath, J., & Garverick, C. (1965). Effects of differential feedback from examinations on retention and transfer. Journal of Educational Psychology, 56, 259–263.
Shapiro, A. M. (2009). An empirical study of personal response technology for improving attendance and learning in a large class. Journal of the Scholarship of Teaching and Learning, 9(1), 13–26.
Shih, M., Rogers, R., Hart, D., Phillis, R., & Lavoie, N. (2008). Community of practice: The use of personal response system technology in large lectures. Paper presented at the University of Massachusetts Conference on Information Technology, Boxborough, MA.
Stowell, J., & Nelson, J. (2007). Benefits of electronic audience response systems on student participation, learning, and emotion. Teaching of Psychology, 34, 253–258. DOI: 10.1080/00986280701700391
Stowell, J. R., Oldham, T., & Bennett, D. (2010). Using student response systems ("clickers") to combat conformity and shyness. Teaching of Psychology, 37, 135–140. DOI: 10.1080/00986281003626631
Szpunar, K. K., McDermott, K. B., & Roediger, H. L. (2008). Testing during study insulates against the buildup of proactive interference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1392–1399. DOI: 10.1037/a0013082
Trees, A., & Jackson, M. (2007). The learning environment in clicker classrooms: Students' processes of learning and involvement in large university-level courses using student response systems. Learning, Media and Technology, 32, 21–40. DOI: 10.1080/17439880601141179
Tulving, E. (1967). The effects of presentation and recall of material in free-recall verbal learning. Journal of Verbal Learning and Verbal Behavior, 6, 175–184. DOI: 10.1016/S0022-5371(67)80092-6
Van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York: Springer.

APPENDIX A: SAMPLE CLICKER–EXAM QUESTION PAIRS
Sample 1.
Clicker question: Which of the following is true about punishment?
A. Punishment is most effective if it always immediately follows the behavior.
B. Punishment works by reducing an undesired behavior.
C. Punishment can be ineffective if a big enough reward can be had by producing the behavior in question.
D. All of the above.

Exam question: Punishment is most effective if:
A. it immediately precedes the operant.
B. it consistently follows the operant.
C. it occasionally follows the operant.
D. there is considerable delay between the operant and the punishment.

Sample 2.
Clicker question: The major difference between a primary and a secondary reinforcer is that primary reinforcers are naturally satisfying, while a secondary reinforcer
A. is something we learn to like.
B. is usually an indirect form of a primary reinforcer.
C. Both A and B.
D. None of the above.

Exam question: Whereas a primary reinforcer derives its reinforcing value _____, conditioned reinforcers derive their reinforcing value _____.
A. from conditioned reinforcers, from primary reinforcers
B. naturally, from primary reinforcers
C. from conditioned reinforcers, naturally
D. naturally, from conditioned stimuli

Name (First/Last): Highlight All Answers
Step 1 - Book Search

1. [Instructions] Using the Woodbury Library Catalog (library.woodbury.edu), search for the book assigned to you. Above you will see a number to the left of your name; locate that number on the Excel spreadsheet in Moodle to find your assigned book. Once you have found your book in the Catalog, select the dropdown menu from "Libraries to search" and select Woodbury University Library. Once you have located the book in the Catalog, click on the title of the book.
2. Name of the book (Once upon a car by Vlasic).
3. Please answer the following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of pages:
f. Place of publication (city/state):
g. What is the OCLC number:
h. How many related subject words have been applied to this item (you can find the number at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5 .K55 A618 2012):
4. Create a proper APA citation:
5. Go to the shelf and locate your assigned book.
a. Take a photo of the front cover of the book.
b. Take a photo of the table of contents.
c. NOTE: Attach both images to this document (NO images from the Internet are allowed!). Shrink the images and place both images under HERE (DO NOT PUT THEM AT THE END).

Step 2 - Library of Congress Classification

1. [Instructions] Using Google, search for: Library of Congress Classification outline. The first result will be the link you want to select. Click on the link and you will be redirected to the Library of Congress Classification Outline webpage.
2. Using the call number of your book (which you identified in Step 1, 3.k), answer the following:
a. Name of the Class (example: K = Law):
b. Name of the Subclass (example: KZ = Law of nations):
3. Under the Subclass:
a. What is the call number range for your book (example: KZ170-173):
b. What is that section or call number range called (example: Annuals):
c. Does this properly describe your book? (Make sure you look at the book on the shelf in the library to answer this question.)
i. Yes/No:
ii. Why?:

Step 3 – Searching Your Topic

1. What is your major (example: History):
2. Write out two search words joined by the word "AND" to find a book in your major (example: economy AND students):
3. Using the Woodbury Library catalog, enter your two search words (joined by AND) and run a search.
a. How many results did you retrieve from Libraries Worldwide:
b. How many results did you retrieve from Woodbury University Libraries:
c. How many results did you retrieve from Burbank (if you have zero results, change your search words until you find a result):
4. Follow the instructions carefully: 1st: On the left side of the screen, click on the link called "Print Book" under Format. 2nd: Select the second title listed in the results. Now answer the following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of pages:
f. Place of publication (city/state):
g. What is the OCLC number:
h. How many related subject words have been applied to this item (you can find the number at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5 .K55 A618 2012):
5. Create a proper APA citation:
6. Go to the shelf and locate the book you found.
a. Take a photo of the front cover of the book.
b. Take a photo of the table of contents.
c. NOTE: Attach both images to this document. Shrink the images so they can fit on one page. NO images from the Internet are allowed!

Updated: Tuesday, October 04, 2016
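Step 2 asks you to decompose an LC call number such as ND511.5 .K55 A618 2012 into its class (first letter) and subclass (the full letter prefix). A minimal sketch of that decomposition in Python; the handful of outline entries shown is an illustrative assumption, not the real Library of Congress Classification Outline, which has thousands of entries:

```python
import re

# Tiny, illustrative excerpt of the LCC outline (assumed for this sketch).
LCC_SAMPLE = {
    "K": "Law",
    "KZ": "Law of nations",
    "N": "Fine Arts",
    "ND": "Painting",
}

def split_call_number(call_number):
    """Split an LC-style call number into its letter prefix (subclass),
    broad class (first letter), and the topic number that follows."""
    match = re.match(r"([A-Z]{1,3})(\d+(?:\.\d+)?)", call_number)
    if not match:
        raise ValueError(f"Not an LC-style call number: {call_number!r}")
    letters, number = match.groups()
    return {
        "class": letters[0],                       # e.g. 'N' = Fine Arts
        "subclass": letters,                       # e.g. 'ND' = Painting
        "number": number,                          # range within the subclass
        "class_name": LCC_SAMPLE.get(letters[0]),
        "subclass_name": LCC_SAMPLE.get(letters),
    }

parts = split_call_number("ND511.5 .K55 A618 2012")
print(parts["class"], parts["subclass"], parts["number"])  # → N ND 511.5
```

Books shelve in this order too: alphabetically by subclass letters, then numerically, which is why identifying the prefix first makes the shelf walk in Step 1.5 faster.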
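Steps 1.4 and 3.5 both ask you to build a proper APA citation from the catalog fields you recorded. A minimal sketch of the assembly, assuming the APA book pattern Author, A. A. (Year). Title. City, ST: Publisher; the year, place, and publisher values below are illustrative assumptions, so copy the actual values from your book's catalog record:

```python
def apa_book_citation(author, year, title, city_state, publisher):
    """Assemble an APA-style reference for a print book. The title should
    be in sentence case; in a real paper it would also be italicized,
    which plain text cannot show."""
    return f"{author} ({year}). {title}. {city_state}: {publisher}."

citation = apa_book_citation(
    author="Vlasic, B.",        # author field from the catalog record
    year=2011,                  # copyright year field (assumed here)
    title="Once upon a car",    # use the full title, including any subtitle
    city_state="New York, NY",  # place of publication (assumed here)
    publisher="HarperCollins",  # publisher field (assumed here)
)
print(citation)
```

The field order in the function mirrors the order of questions 3.a–3.f on the worksheet, so you can fill it in directly from your answers.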