The Fifteen Ethical Traps and Lessons
Learned on Avoiding Them
Student name:
Ethical Trap and Avoidance Mechanism One: Justification
The justification trap involves people excusing bad decisions and unethical behavior by claiming they are necessary or the need of the hour. It makes people believe that their unethical behavior serves a greater good that will come out of it. The world has seen many instances of this trap: killing people who happen to belong to a particular religion has been justified time and again as a defense of the killers’ own religion.
This trap can be tackled by adopting the reputation perspective, which calls for a person to take responsibility and do the right thing at all times. Its principles ask a person to act with justice, integrity, and courage.
Ethical Trap and Avoidance Mechanism Two: Money
Money is thought of as the means to achieve happiness and as one of the goals of life. This trap pushes people to acquire as much money as they can, by whatever means, to attain the happiness they seek. A person is judged by money in this world, and money has displaced every other metric of comparison. The urge to earn money fast leads people to take wrong measures for easy profits, evade taxes, and indulge in illegal activities.
This trap can be tackled by weighing people on the scale of their work and their attitude toward other people, and by giving money less weight in life.
Ethical Trap and Avoidance Mechanism Three: Conflicts of
Interest
The conflict of interest trap involves being caught in the middle of a dispute between two parties in which only one party can benefit while the other loses. The conflict of interest arises when a person resolves the dispute according to what benefits him the most, for instance by taking a bribe from one party in order to rule in its favor.
Mill’s principles can help a person avoid conflicts of interest: always act as the rules require. Integrity and honesty ensure that the person makes the best decision. Never accepting any favor and always working within the rules will keep this trap at bay.
Ethical Trap and Avoidance Mechanism Four: Faceless victims
This trap involves generalizing victims. By doing so, the unethical behavior toward those affected diminishes in the mind of the person who committed it. The trap consists of not picturing the victims’ pain, which makes it easy to avoid responsibility for the damage caused. Referring to people who died in a war merely as numbers is part of this trap.
This trap can be avoided by ensuring that all people are seen in the same way; the human factor should never be stripped from any victim. Responsibility should be taken, and measures should be taken so that the damage can be compensated in some way rather than shrugged off. True integrity and courage are required to avoid this trap.
Ethical Trap and Avoidance Mechanism Five: Conformity
This trap is the aligning of attitudes, beliefs, and behavior just to fit in better. If coworkers do not work diligently or are not honest, a person faces the choice of doing the same or being nagged all the time and left out of the group for not doing what they do.
Conformity can be avoided through self-satisfaction. A person who knows he or she does not need a group to be happy can work as they wish without changing their beliefs. Strength and integrity ensure a person does what is required at all times; there is no need to be like someone else just to fit in.
Ethical Trap and Avoidance Mechanism Six: Advantageous
Comparison
This trap deals with comparing one’s action with something worse. It gives the satisfaction that what the person did is better than what he could have done, and so the action gets validated.
This trap can be avoided by making sure the action is compared with something of equal or higher nature, or not at all. If a person compares against a higher standard, he will know what he did wrong, and from then on it won’t be repeated.
Ethical Trap and Avoidance Mechanism Seven: Obedience to
Authority
This trap involves showing obedience to someone who has more power than you. If a manager asks for something that may be wrong, an employee will obey because disobedience could get him fired. Obedience without knowing the nature of the work or the consequences of the action is thus an ethical trap born of fear.
This trap can be avoided by making sure that work is done for the firm’s welfare and not merely to please authority. Authority can be asked to explain why a particular task has to be done. Blind faith in authority is avoided through the strength to always do right and to question what is wrong.
Ethical Trap and Avoidance Mechanism Eight: Alcohol
This trap involves taking alcohol to wash away bad feelings. These feelings come from having done something bad, and being intoxicated makes a person forget them temporarily.
This trap can be avoided by knowing that alcohol is not a permanent solution. It takes courage and integrity to accept that something wrong has been done and to work on it so that such things are not repeated.
Ethical Trap and Avoidance Mechanism Nine: Contempt for
Victim
This trap involves dehumanizing victims to make it easier to harm them. When victims are seen as mere numbers and employees as hired help, those in authority feel licensed to treat humans as less than human.
This can be avoided by seeing people as human. It requires inner strength and the ability to see the impact of the harm we are about to inflict upon them. Putting ourselves in their position can provide a reality check that ensures they are seen as humans in the future.
Ethical Trap and Avoidance Mechanism Ten: Competition
This trap involves competition between two or more people or parties. Competition is the drive to get ahead of the other. It sometimes grows so fierce that one or both parties break the rules and resort to unethical practices to harm the other and pull ahead.
This trap can be avoided by maintaining mutual respect among the parties. Competitors can coexist only if they respect each other. Competition is good when it is conducted fairly, within the rules and regulations.
Ethical Trap and Avoidance Mechanism Eleven: We Won’t get
Caught
This trap is the self-deluding feeling that nothing will happen to us even if we are doing something wrong. Lack of faith in justice, or a crooked system, often convinces people that they are carrying out unethical practices in the safest way possible and that nothing will be traced back to them.
This can be avoided by remembering that justice will prevail sooner or later. Even if justice fails to catch them, they will be haunted by their conscience. Integrity and honesty at work ensure that people always do the right thing and never develop the feeling that they won’t be caught, since they will always know what they did.
Ethical Trap and Avoidance Mechanism Twelve: Anger
This trap involves covering up fear by showing hostility toward others. Anger masks guilt, but it also keeps people at a distance from whomever it is directed at. Anger is a very powerful emotion that can make people aggressive.
This can be avoided by having sympathy and love toward every person. When the anger itself weakens, a person will have no fear in admitting his guilt to the person he wronged.
Ethical Trap and Avoidance Mechanism Thirteen: Small Steps
This trap involves committing unethical acts in small steps. Each step makes the person more tolerant of its unethical nature. The trap grows more severe as the person becomes accustomed to the wrongdoing and keeps raising the bar of what counts as a small step.
This can be avoided in the initial stages. Every small step should be seen as a sign of guilt; it should never be accepted but dealt with firmly so that its gravity never increases.
Ethical Trap and Avoidance Mechanism Fourteen: Tyranny of
Goals
This trap pushes people to move fast in order to achieve their goals. The goals must be reached even if it means cutting corners on lesser goals along the way to the prime goal, which can involve shortcuts and unethical approaches just to finish the work.
This can be avoided by making sure that goals are completed as they were originally intended. Nothing should be left behind, nor should any goal be modified just to claim it has been reached. The quality and the desired form of the goal should remain untouched.
Ethical Trap and Avoidance Mechanism Fifteen: Don’t make
Waves
This trap deals with using authority to keep everyone quiet on a subject in order to avoid suspicion or challenge. It ensures that everyone stays silent and that no measures are taken to unearth a matter suspected of involving unethical behavior.
This trap can be avoided by ensuring that everyone gets a say in the matter: meetings are regular, and everyone is allowed to speak their mind without fear of being reprimanded later. Free thinking and the willingness to accept any wrongdoing can ensure that such a trap is avoided.
A Controlled Study of Clicker-Assisted Memory Enhancement
in College Classrooms
AMY M. SHAPIRO1* and LEAMARIE T. GORDON2
1Psychology Department, University of Massachusetts
Dartmouth, Dartmouth, MA, USA
2Psychology Department, Tufts University, Medford, MA, USA
Summary: Personal response systems, commonly called
‘clickers’, are widely used in secondary and post-secondary
classrooms.
Although many studies show they enhance learning,
experimental findings are mixed, and methodological issues
limit their
conclusions. Moreover, prior work has not determined whether
clickers affect cognitive change or simply alert students to
information likely to be on tests. The present investigation used
a highly controlled methodology that removed subject and item
differences from the data to explore the effect of clicker
questions on memory for targeted facts in a live classroom and
to gain
a window on the cognitive processes affecting the outcome. We
found that in-class clicker questions given in a university
psychology class augmented performance on delayed exam
experiment was designed to answer two questions. Specifically,
does clicker use promote learning in the classroom?
If so, do the observed improvements reflect true cognitive
change or are the enhancements simply a reflection of greater
emphasis placed on clicker-targeted information? To explain
the motivation behind this work, the following section will
briefly review the literature on clicker-assisted learning and
methodological concerns that may limit any conclusions that
can be made. A discussion of cognitive mechanisms that
may explain clicker effects will provide a foundation for
the specific research questions addressed by the study.
CLICKER-ASSISTED LEARNING OUTCOMES
Many studies employing indirect measures of learning have
reported positive effects of clickers, such as class participation
(Draper & Brown, 2004; Stowell & Nelson, 2007; Trees &
Jackson, 2007) and perceptions of learning (Hatch, Jensen, &
Moore, 2005), across various disciplines. Yet others report no
effect of clickers on indirect measures such as attendance,
engagement, or attentiveness (e.g. Morling, McAuliffe, Cohen,
& DiLorenzo, 2008). Such varied results exemplify the array of
findings within clicker literature.
More relevant to the present study, investigations employing
direct learning measures have also yielded somewhat mixed
results. Stowell and Nelson (2007) gave laboratory subjects a
simulated introductory psychology lecture and compared test
performance between groups asked to either use clickers or
do other sorts of participative activities during the lecture. They
found no differences between groups on learning outcome
measures. Kennedy and Cutts (2005), however, observed some
clicker effects but found that the strength of the relationship
between clicker use and learning outcome measures hinged
on how successful students were in answering the clicker
questions. Despite such discouraging reports, the majority of
class. Of course, class differences could account for some of
the effect. Evidence against group differences was provided
by a set of questions targeted as controls, for which neither
class saw clicker questions. Performance on those items
differed by just under 3% between classes. Although error
due to differences between items chosen for control and test
conditions may have been a factor, the data do point to an
effect of clickers on targeted exam question performance, but
not on untargeted question performance.
Using a similar methodology, Mayer et al. (2009) also
evaluated clicker-assisted learning. In addition to clicker
and control classes, they used a third no-clicker class, which
was given questions on paper to answer at the end of each
class rather than using clickers. Like Shapiro (2009), they
targeted specific exam questions with in-class questions in
both clicker and no-clicker classes. In the primary analysis,
they evaluated overall exam performance, including exam
questions that were directly targeted by clicker questions
(similar items) and those that were not (dissimilar items).
When the total exam score was used as the dependent
measure, students using clickers performed better compared
with those not using clickers. Mayer et al. also conducted a
secondary analysis, however, comparing student perfor-
mance on ‘similar’ versus ‘dissimilar’ exam questions (see
their Table 2). They reported no significant performance
differences on ‘similar’ test questions between the clicker and
no-clicker classes and the control classes. The clicker class,
however, performed significantly better on ‘dissimilar’ items
than the other two classes. It appears, then, that the overall
effect of clickers found in the primary analysis stemmed from
the dissimilar items. Although Mayer et al. and Shapiro both
found that clicker use improved test performance, their
findings do not cohere. Mayer et al. found a positive effect of
clicker questions on untargeted (dissimilar) test items but not
targeted (similar) items, whereas Shapiro found performance
enhancement only on targeted (similar) questions.
The literature on clicker-assisted learning is very inconsis-
tent in its findings. What may be at the root of differential
results among so many studies? A number of important
methodological issues within studies reporting clicker effects
may be the answer. One source may be class or instructor
differences within studies that have compared clicker-adopting
classes with non-adopting classes, as these studies are
vulnerable to the error introduced by individual and group
differences. Item differences between clicker-targeted and
control test items also may be problematic, as lack of counter-
balancing between conditions also introduces error. Moreover,
lack of standardization or control regarding the strength of the
relationships between clicker questions and test questions
creates a potential for variability in strength of the treatment
between items in a study, thus threatening internal validity.
Finally, student motivation varies between studies, as students
may be enticed to participate through varied means such as
extra credit, graded tests, or laboratory credit.
Without access to multiple sites or classes to create a
cluster randomized design (Raudenbush, 1997), it is difficult
to conduct a true experiment in a natural classroom. How-
ever, the present investigation combined a within-subjects
and within-items design that controls variability from both
factors while retaining ecological validity. We know of
no other study of clicker-assisted learning to use such a
design in a study of content learning (but see Stowell, Oldham,
& Bennett, 2010, and Roediger, Agarwal, McDaniel, &
McDermott, 2011). In addition to providing greater control,
the within-subjects design offers a strong test of clicker effects
on targeted material, as clickers will be present in the class-
room throughout the experiment. In this way, the experiment
is not a simple ‘clicker classroom versus no-clicker classroom’
study. Instead, all subjects were exposed to clickers, and the
dependent variable measured the effect of clicker questions
on the acquisition of specific concepts targeted by clicker
questions. If clicker effects are still detected under these
conditions, the study will have found strong evidence for
clicker-enhanced learning of targeted content.
COGNITION AND CLICKERS
As elucidated in the previous section, many researchers have
made claims about the positive effect of clicker technology
on learning and memory. It is possible, however, that clicker
questions merely highlight important ideas for students. In
other words, the effect may come about by prompting
students to direct attention resources to specific items during
class and in subsequent study. Attention is a necessary first
step in creating a memory, so anything that increases
attention holds the possibility of enhancing memory. A
savvy student should be able to glean from in-class questions
the information deemed important by the instructor. It would
make sense to direct study efforts toward those topics. If this
sort of attention grabbing is at the root of the learning
enhancements observed in some clicker studies, the effects
are not particularly interesting from a cognitive or theoretical
point of view. It would also bring into question whether the
effort required to generate clicker questions, not to mention
the expense of the hardware to students, is worthwhile. After
all, it might be just as effective to give students lists of
important topics to attend to in class and during study.
A second and more theoretically interesting possibility,
one that would support the use of clickers as a means of
affecting cognitive change, is that clicker-induced retrieval
acts as a source of memory encoding. Known as the testing
effect, Karpicke, Roediger, and others have documented
that the act of retrieving information from memory can
Although transfer-appropriate processing is one reasonable
hypothesis about the testing effect, there is evidence to
support another. Specifically, there is evidence that the
process of retrieval itself strengthens or otherwise alters the
memory trace, a possibility proposed by Bjork (1975). With
that idea in mind, Kang, McDermott, and Roediger (2007)
theorized that, if retrieval is a factor, a more demanding
retrieval task should produce stronger testing effects than a
simpler task. They found that, as long as feedback was
offered during initial tests, short-answer tests improved
performance on a final test better than the multiple-choice
or control conditions. A similar effect was reported by
McDaniel, Anderson, Derbish, and Morrisette (2007), who
also found that a more demanding, short-answer test showed
the greatest learning improvement.
Results of Kang et al. (2007) support Bjork’s (1975) notion
that the act of retrieval may strengthen the memory trace.
However, they also point to the importance of feedback in
the testing effect, a topic that has been much studied in the
literature. Overall, empirical research has demonstrated that
feedback has a generally positive effect on learning outcomes
(e.g. Butler, Karpicke, & Roediger, 2007; Pashler, Cepeda,
Wixted, & Rohrer, 2005; Sassenrath & Gaverick, 1965). Feed-
back as an explanatory mechanism for the testing effect is very
relevant to the exploration of clicker effects because many
instructors offer immediate feedback to clicker responses
by projecting graphs of class polling results. It is important
to note that the testing effect has been demonstrated in many
experiments not employing feedback (Kang et al., 2007,
Experiment 1; Marsh, Agarwal, & Roediger, 2009; Roediger
& Karpicke, 2006a, 2006b), so regardless of the factors
contributing to feedback effects, some other mechanism
unique to testing appears to be working either alongside or
integrated with feedback.
GOALS OF THE PRESENT STUDY
The published literature on clicker learning effects is troubled
by methodological issues that impede clear understanding of
the technology’s effect on learning and memory. Thus, the first
goal of the present study was to employ a methodology that
controlled subject and item differences. Toward this end, a
series of clicker questions were written for targeted exam
questions that were offered in two college-level clicker-based
classrooms, which provided ecologically valid conditions
under which to examine clicker effects. Half the questions
served as control items in one class and as clicker-targeted
items in the other. In this way, all subjects and all items served
in both conditions, thus eliminating any error introduced by
possible item and subject differences. It is important to note
that the present study was not designed as a general investiga-
tion of ‘clickers versus no clickers’ in the classroom. Rather, it
was aimed at examining the effect of clicker questions on
acquisition of the specific information they target within a
clicker classroom. If clicker effects stem from a general effect
of questioning in class, there should be no difference between
clicker and control conditions in the present study. In this way,
the present study is a strong test of clicker effects, as the
within-subjects design biases the results against the study’s
main hypothesis if clicker effects are general rather than
specific to the targeted information.
In addition to exploring the learning effects of clickers, a
second aim of the experiment was to rule out the possibility
that clickers work by alerting students to the content of
future exam questions. Thus, we also compared performance
on test items targeted with clicker questions with perfor-
mance on the same items when students were told the
information would be on the test. If clickers work by
17. invoking the testing effect rather than alerting students to
important information, performance on clicker-targeted exam
questions should be equal to or better than performance
on the same items when attention ‘flags’ are given. If the
attention-grabbing hypothesis can be ruled out, the testing
effect will be the most reasonable explanation for clicker
effects. If cognitive change due to the testing effect can be
identified as the source of clicker effects on test performance,
it would mean that clicker technology offers a true learning
advantage rather than mere study prompts. Such a result
would be important to understanding the cognition
underlying clicker use and pedagogical practice.
METHOD
The experiment was designed to test two distinct hypotheses.
The first was that in-class clicker questions would have a
positive effect on students’ ability to remember factual
information and answer delayed exam questions on the same
topic. If Hypothesis 1 is correct, items targeted by clicker
questions will be answered correctly more often than when
they are not targeted by clicker questions. This result will
also serve as an important validity check of our methodology.
Because we used a within-subjects design, there is the possibil-
ity that the presence of clickers in the classroom will boost
performance on non-targeted items. A significant difference
between the clicker and simple control conditions will demon-
strate that the presence of the clickers did not contaminate the
control condition.
The second hypothesis was that clicker-mediated perfor-
mance improvement is due to directing students’ attention to
the relevant material, thus flagging certain information as
important and likely to be on the exams. Hypothesis 2 leads
to the prediction that targeting exam questions with alerts will
not increase exam performance more than targeting the same
board approval was sought prior to beginning the study,
and a waiver was granted.
Materials
The class covered 11 topics in general psychology, with a
chapter assigned for each in Discovering Psychology
(Hockenbury & Hockenbury, 2007). The class met 3 days a
week for 50 minutes over 15 weeks and was taught as a
typical lecture course with some videos, interactive activities,
and participation integrated into many of the lectures. All
lectures were accompanied by a PowerPoint presentation that
projected main points and illustrations onto a large screen.
The slides were projected with an Apple MacBook Pro com-
puter and a digital projection system. In-class clicker questions
were integrated into the PowerPoint presentations, with
individual slides dedicated to single questions. The iClicker
system was used to allow students to make their responses to
clicker questions. Students were required to purchase their
clickers along with their textbooks. The iClicker Company
supplies the receiver and software at no cost to adopting
instructors.
The exams in this class were not cumulative, each covering
only the assigned material since the previous test. Four exam
items from each course topic (44 exam items), spread across
four different tests during the semester, were chosen as targets
for the experiment. Performance on these items was the
dependent variable. A multiple-choice clicker question was
written for each exam question, all of which were also multiple
choice. All clicker and exam questions used for the study were
factual, asking only about basic, declarative information
presented in class. Appendix A provides two sample clicker–
exam question pairs. All targeted exam questions were
included on the exams for each of the classes participating in
the study.
Two independent content experts provided validation
ratings of the stimuli. Both were professors of psychology
who routinely taught introductory psychology. They were
presented with each clicker and exam question and asked
to rate them on a 7-point scale for the following dimensions:
(i) overall quality of the question, (ii) relevance of the
information targeted by the clicker–exam item pairs to the
content and goals of an introductory psychology course;
and (iii) the relationship between each clicker item and each
exam question. For each index, higher ratings indicated
better-quality questions, greater relevance to the course
aims, and a greater relationship between clicker and exam
items, respectively.
A cutoff mean of 4.5 was set for the quality and relevance
scores. Any question or clicker–exam question pair that did
not achieve a mean rating of 4.5 on all these dimensions
was not used in the study. The mean overall quality rating
for the clicker and exam questions used in the study was
6.11 and 6.09, respectively. The range of mean scores was
5.0–6.5 for the clicker questions and 5.5–7.0 for the exam
questions. The mean relevance of the material to the course
was 6.36, with a range of 4.5–7.0.
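The screening rule described above can be sketched as a simple filter. Only the 4.5 cutoff and the three rating dimensions come from the study; the individual rater scores below are invented for illustration.

```python
# Sketch of the 4.5 mean-rating cutoff described above.
# Only the cutoff and the three dimensions come from the study;
# the raters' scores are invented.

def passes_cutoff(ratings_by_dimension, cutoff=4.5):
    """Keep a clicker-exam item only if its mean rating meets the
    cutoff on every dimension (quality, relevance, relationship)."""
    return all(sum(scores) / len(scores) >= cutoff
               for scores in ratings_by_dimension.values())

strong_item = {"quality": [6, 7], "relevance": [6, 6], "relationship": [5, 7]}
weak_item = {"quality": [6, 7], "relevance": [4, 4], "relationship": [5, 7]}

assert passes_cutoff(strong_item)    # every dimension's mean is >= 4.5
assert not passes_cutoff(weak_item)  # relevance mean is 4.0, so excluded
```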
To establish the strength of the relationship between
clicker and exam question pairs, the raters were asked to
indicate the extent to which correctly answering each clicker
question required retrieval of the same information from
memory as each exam question. This was performed for
two reasons. The first was to validate each clicker question
as a reasonable test of the same knowledge as its intended
exam target. Thus, a high rating established that the clicker
questions were directly accessing the memory relevant to
their respective exam questions. The second reason was to
at an appropriate time during lecture. Clicker questions were
offered during lecture at varying time intervals that were not
predictable to students. They were offered after a topic was
covered and only after the instructor both solicited and
answered any questions from the class. Anywhere from
one to five clicker questions were asked on any given day
in class. Some of these were not experimental clicker items
but were used as ‘filler’ questions to provide sufficient credit
for students. The percent of correctly answered clicker
questions over the course of the semester was calculated as
roughly 14% of the final grade.
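One plausible reading of that grading rule is sketched below. The 14% weight comes from the text; the scores and the exact weighting scheme are assumptions for illustration.

```python
# Sketch of folding clicker credit into a final grade.
# The 14% weight is from the text; the scores are invented, and the
# linear weighting is an assumed interpretation.

def final_grade(clicker_correct, clicker_total, other_grade,
                clicker_weight=0.14):
    """Weight the percent of correctly answered clicker questions
    at `clicker_weight` of the final grade."""
    clicker_pct = 100.0 * clicker_correct / clicker_total
    return clicker_weight * clicker_pct + (1 - clicker_weight) * other_grade

# e.g. 45 of 50 clicker questions correct, 80% on everything else:
grade = final_grade(45, 50, 80.0)
assert abs(grade - (0.14 * 90.0 + 0.86 * 80.0)) < 1e-9
```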
One set of 20 items was chosen to test Hypothesis 1,
regarding the learning effects of clickers. A separate set
of 20 items was chosen to address Hypothesis 2, regarding
the cognition underlying clicker effects. Regardless of which
hypothesis was being tested, the clicker questions were
offered in the same way, as previously described. Within each
subset of 20 clicker–exam question pairs created for the
experiment, 10 were assigned to the clicker condition in one
class and to the control condition in the other class. The oppo-
site assignment was made for the other 10 items. Thus, each
subject and item contributed equally to both conditions.
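The counterbalancing scheme described above can be sketched as a small helper; the function name `assign_items` and the numeric item IDs are illustrative inventions, not part of the study's materials:

```python
# Hypothetical sketch of the counterbalanced assignment: within each
# 20-item set, 10 items serve as clicker items in one class and as
# control items in the other, with the opposite assignment for the rest.

def assign_items(item_ids):
    """Split 20 item IDs into two counterbalanced condition maps."""
    first_half, second_half = item_ids[:10], item_ids[10:]
    class_a = {"clicker": first_half, "control": second_half}
    class_b = {"clicker": second_half, "control": first_half}
    return class_a, class_b

class_a, class_b = assign_items(list(range(1, 21)))
# Each item contributes to both conditions across the two classes.
assert set(class_a["clicker"]) == set(class_b["control"])
```

This guarantees that every item and every subject contributes data to both the clicker and control conditions, as the design requires.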
Whereas the procedure for presenting clicker questions
was identical across item sets used to test each hypothesis,
the procedure used to create the control conditions differed.
In the case of Hypothesis 1, the information relevant to the
control item was simply presented as part of the class lecture,
with the information included on a PowerPoint slide. Thus,
the conditions were merely set up to compare learning when
clickers are used or not. For Hypothesis 1, then, the condi-
tions will be referred to as the clicker1 and simple control
conditions. For the second hypothesis, the experimental
and no-clicker control conditions will be referred to as the
clicker2 and attention-grabbing conditions, respectively.
When the information necessary to answer an exam question
targeted as an attention-grabbing item was presented in class,
it was highlighted on the projected slide. The instructor’s
remote was used to turn the font red and pulse the text. In
addition, the instructor announced, ‘This information is very
important. It is likely to be on your test.’ These attention-
grabbing ‘flags’ were offered either just before or during
the presentation of the relevant information.
Students were allowed 40–90 seconds to answer each
question, depending upon how long the question was. When
a question was projected, a timer also appeared on the
screen, thus making students aware of the time limit. After
students had submitted their responses, a bar chart showing
the percentage of the class to respond with each option was
projected onto the screen, and the instructor highlighted the
correct answer in red by clicking on the bar. In this way,
students received feedback about their responses to each
question. If less than 90% of the class correctly answered
an item, the instructor explained the correct answer, whether
students posed questions or not. On all but a few of the
clicker items used in the study, however, students scored
90% or higher and asked no questions after seeing the
correct answer.
An in-class survey was also given to students 1 week
before the end of the semester. The survey was designed to
solicit students’ conscious impressions of factors affecting
their memory and study strategies. The survey was adminis-
tered by projecting the questions onto the screen during
class. Each question was projected individually, and the
instructor read each aloud. Students were asked to indicate
a response to each question using a 5-point Likert scale with
their clickers. Students were given 15 seconds to respond to
each question. Specifically, students were asked how much
the in-class questions, the highlighted information on the
PowerPoint slides, and instructor emphasis affected their
choices about what to study. They were also asked to rate
how much each of those factors enhanced their learning
and memory of class material. None of the class results
were projected to the class or reported to them before the
last test.
RESULTS AND DISCUSSION
Students who attended fewer than 60% of the classes over
the semester were excluded from the analysis, as their
exposure to the independent variable was considered too
low to reflect accurately the effect of the intervention. Like-
wise, students who missed more than one exam were also
excluded, as these students were missing at least half the
data. A total of 226 subjects were included in the analysis.
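The exclusion rules can be expressed as a simple filter; the record fields below (`attendance`, `missed_exams`) are assumed names for illustration, not the study's actual data structures:

```python
# Sketch of the exclusion criteria described above: drop students with
# attendance below 60% or more than one missed exam.
students = [
    {"id": 1, "attendance": 0.95, "missed_exams": 0},
    {"id": 2, "attendance": 0.55, "missed_exams": 0},  # < 60% attendance
    {"id": 3, "attendance": 0.80, "missed_exams": 2},  # missed > 1 exam
]

included = [
    s for s in students
    if s["attendance"] >= 0.60 and s["missed_exams"] <= 1
]
# Only student 1 satisfies both inclusion criteria.
```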
As a check of the equivalence of the independent and
dependent variables used to test the two hypotheses, a paired
t-test was conducted to compare subjects’ performance on
the 20 exam questions used to test Hypothesis 1 when
assigned to the clicker1 condition with the second set of 20
used to test Hypothesis 2 when assigned to the clicker2
condition. Students scored a mean of 69.8 (SD = 17.9) on
the clicker1 items and 72.1 (SD = 17.8) on the clicker2
items. The difference was non-significant in a paired t-test,
t(225) = 1.62, p > .05. An unpaired t-test was conducted to
compare means when calculated by items. Those in the
clicker1 condition were correctly answered by a mean of
68.6% (SD = 17.8) of students, and those in the clicker2
condition by 72.5% (SD = 17.8) of students. The difference
was non-significant, t(38) = 0.74, p > .05.
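The two equivalence checks described above (a by-subjects paired t-test and a by-items unpaired t-test) can be sketched with SciPy's public API; the score lists below are synthetic placeholders, not the study's data:

```python
from scipy import stats

# By-subjects check: each subject contributes a score in both conditions,
# so the samples are paired.
clicker1_scores = [70, 68, 72, 65, 71, 69, 74, 66]
clicker2_scores = [72, 70, 71, 66, 73, 70, 75, 68]
paired = stats.ttest_rel(clicker1_scores, clicker2_scores)

# By-items check: different items in each condition, so the samples are
# independent and an unpaired test applies.
item_means_c1 = [68, 70, 65, 72, 69]
item_means_c2 = [71, 73, 66, 74, 70]
unpaired = stats.ttest_ind(item_means_c1, item_means_c2)
```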
significant when analyzed by items, t(19) = 3.46, p < .01,
d = 0.77. For exam items in the no-clicker control condition,
a mean of 62.2% of students answered correctly. When the
same items were used in the clicker condition, a mean of
68.6% of students answered correctly. The 6.4-point differ-
ence represents a 10.3% performance increase on exam
items when in-class clicker questions were asked about
relevant content.
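The reported effect size can be verified with a quick arithmetic check using the two condition means stated above:

```python
# Worked check of the reported improvement: a 6.4-point gain on a 62.2%
# baseline corresponds to a 10.3% relative performance increase.
control_mean = 62.2   # % correct, items in the no-clicker control condition
clicker_mean = 68.6   # % correct, same items in the clicker condition

point_difference = clicker_mean - control_mean
relative_increase = point_difference / control_mean * 100
print(round(point_difference, 1), round(relative_increase, 1))  # 6.4 10.3
```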
These results strongly support the conclusion that asking
students factual, multiple-choice questions enhances mem-
ory for the relevant information on delayed, factual test
questions. They suggest that the technology may be taking
advantage of the testing effect in the classroom. The magni-
tude of the observed effect is not unprecedented: prior
studies have shown that a single testing episode in advance
of a final test can enhance learning by even
greater amounts (see Roediger & Karpicke, 2006b, for a
review). As any reasonable critic would rightly point out,
however, it may be the case that clicker questions do not
strengthen memory traces or connections leading to them.
Rather than affecting true cognitive change, the questions
may merely cue students that the instructor deems certain
pieces of information to be of particular importance. If so,
it would certainly be reasonable for students to focus more
on that information during study, thus augmenting perfor-
mance on test items targeting that information. Analysis of
the second stimulus set and the survey results addresses
that issue.
Hypothesis 2: Clicker questions improve learning by
alerting students to important material
The attention-grabbing hypothesis was not supported by
the comparison of the clicker2 and attention-grabbing condi-
tions. When information was highlighted on class slides and
students were told it was important and would be included
on the test (the attention-grabbing condition), students
correctly answered an average of 70.1% (SD = 17.8) of
the targeted exam questions. When they were not told the
material was of particular importance but were given clicker
questions about the material (the clicker2 condition), they
correctly answered 72.1% (SD = 17.2). The difference was
not statistically significant, t(225) = 1.33, p > .05. Analyzed
by items, an average of 68.7% of students correctly answered
targeted exam questions when they were in the attention-
grabbing condition and 72.5% correctly answered the same
items when assigned to the clicker2 condition. The difference
just reached significance and had a medium effect size,
t(19) = 2.06, p = .05, d = 0.46. In short, offering a clicker
question improved performance on delayed exam questions
as well as or better than explicitly telling students that the
information would be on the test.
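Cohen's d for a paired comparison like the by-items analysis above is typically computed as the mean of the paired differences divided by the standard deviation of those differences. A minimal sketch with synthetic item means (not the study's data):

```python
import statistics

# Synthetic per-item percent-correct means for the two conditions.
attention = [65, 70, 62, 71, 68, 66]
clicker2 = [69, 73, 64, 75, 71, 70]

# Paired Cohen's d: mean difference / standard deviation of differences.
diffs = [c - a for c, a in zip(clicker2, attention)]
d = statistics.mean(diffs) / statistics.stdev(diffs)
```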
Class survey
Unpaired t-tests comparing class responses to the survey
questions indicated no significant differences between clas-
ses with respect to how they answered any of the survey
questions. As such, all of the data for both classes were
combined for the analysis. To elicit students' candid
responses, the surveys were anonymous. As such, it was not
possible to identify the students that attended fewer than 60%
of classes or missed more than one test. Thus, the survey results
represent the entire class, rather than the subset of students
used for the study.
A repeated-measures analysis of variance with a Green-
house–Geisser correction comparing students’ responses
with the questions probing how much the clicker questions,
professor emphasis, and slide emphasis helped them to
learn the material was significant, F(1.77, 476.38) = 68.409,
p < .001, partial η² = .20. Students reported that answering the
clicker questions was slightly less than moderately helpful
in learning the material, as the average rating was 2.84 on
a 1–5 scale. The means were 3.68 and 3.39 for professor
and slide emphasis, respectively. Pairwise comparisons
using the Bonferroni correction indicated that students felt
that the clicker questions had significantly less impact
on learning class material than both the slide emphasis
(p < .01) and the instructor’s verbal remarks (p < .01). The
difference between slide and instructor emphasis was also
significant, p < .01.
Survey questions also probed students for information
about what guided their decisions about what to study. If
clicker questions were effective because they drew students’
attention to material to be tested, one would expect that
students would have used that information to direct their
study efforts. Student responses, however, do not indicate
that clicker questions were highly influential, as they rated
their impact on study choices with a moderate mean of
3.04. Students rated the professor’s verbal remarks and
highlighted information on the slides much higher (4.28 and
3.86, respectively) than the clicker questions. A repeated-
measures analysis of variance with a Greenhouse–Geisser
correction indicated that the differences were significant,
F(1.76, 481.34) = 184.012, p < .001, partial η² = .40. Again, pairwise
comparisons using the Bonferroni correction indicated
significant differences between ratings for the clicker and slide
emphasis, clicker and instructor emphasis, and slide and
instructor emphasis, all at p < .01.
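A Bonferroni correction of this kind divides the alpha level by the number of pairwise tests. A hedged sketch with synthetic Likert ratings (the numbers below are illustrative, not the survey data):

```python
from itertools import combinations
from scipy import stats

# Synthetic 1-5 Likert ratings for the three emphasis conditions,
# one rating per respondent per condition (paired data).
ratings = {
    "clicker": [3, 3, 2, 4, 3, 3, 2, 3],
    "slides": [4, 4, 3, 4, 4, 3, 4, 4],
    "instructor": [4, 5, 4, 5, 4, 4, 5, 4],
}

pairs = list(combinations(ratings, 2))
alpha = 0.05 / len(pairs)  # Bonferroni: divide alpha by the test count

# Paired t-test for each pair of conditions, then compare against the
# corrected alpha.
results = {
    (a, b): stats.ttest_rel(ratings[a], ratings[b]).pvalue
    for a, b in pairs
}
significant = {pair: p < alpha for pair, p in results.items()}
```

With three conditions there are three pairwise tests, so each is evaluated at alpha = .05 / 3 ≈ .0167, which is what "all at p < .01" survives comfortably.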
In sum, the results of the Hypothesis 1 analysis demon-
strate that clicker technology is an effective classroom
learning tool. Performance on delayed, targeted exam ques-
tions increased significantly when the information was tested
in class shortly after learning the material. The test of
and within-subjects design, while still conducting the study
in live classrooms, the experiment was designed to tighten
experimental control while maximizing ecological validity.
Because the design was within subjects and students were
using clickers in the same lectures in which they were
exposed to the control condition content, the significant
performance difference between clicker and control items
indicates a strong effect of clicker questions on targeted
information acquisition. The second goal was to provide
evidence about the cognition underlying clicker-assisted
learning effects. The experiment demonstrated that clickers
are effective pedagogical tools. Performance on delayed
exam questions increased significantly when the information
was targeted by in-class clicker questions. It also revealed
that clicker questions were equally or more effective than
cuing students about the information being on a future exam.
The results support a role of the testing effect in clicker-
assisted learning; however, the equivalent performance of
the clicker and attention-grabbing groups in the subject
analysis of Hypothesis 2 does not completely rule out the
role of attention grabbing in clicker effects. It is likely that
the testing effect is working in tandem with attention grab-
bing and perhaps some increased study of clicker-targeted
information. The data trend seen in the means of that
analysis, however, is in a direction opposite to what the
attention-grabbing hypothesis predicts. Moreover, the analy-
sis by items indicated a significant advantage of clicker
questions over alerts, with a moderate effect size, although
the survey results indicated students actually studied the
information in the alert condition more than the clicker
condition. The latter point is remarkable because it reveals
that students performed better on the very questions they
reported attending to less during study (i.e. the clicker-
targeted items). Further, the clicker questions were only
offered after information was presented in class, so they
could not have served to increase attention during lecture.
The attention alerts, however, were often given before or in
the middle of explanations, so attention actually should have
been greater in the attention condition. On balance, the
weight of evidence cannot rule out a role of attention
grabbing in clicker effects, so the ability of clicker questions
to ‘flag’ information should be further explored in future
studies.
To whatever degree attention grabbing is at play in clicker
effects, there seems to be something about the actual act of
answering clicker questions (apart from attention grabbing)
that enhances memory for lecture content. One possible
mechanism through which answering clicker questions may
enhance memory for class material is repetition. That is,
clicker questions may merely offer multiple exposures to
the information. After the information is provided in class,
the clicker questions serve as a second exposure, thus
enhancing the strength of memory for the material. However,
the magnitude of the improvement seen in the clicker1 versus
simple control analysis (10–13%) is hard to explain by a
single re-exposure to the material during class. Perhaps
students studied clicker-targeted material more, thus increas-
ing exposure to the material outside of class. If so, the results
might be attributable to repetition effects, after all. The
survey results, however, indicated that the alerts were more
influential than clicker questions in directing students’ study
efforts. Given that students’ self-reports indicate that they
spent significantly more time studying the information that
was highlighted in class than the information targeted by
the clicker questions, one would expect greater repetition
and learning in the attention-grabbing condition as opposed
to the clicker2 condition. Because performance on items
assigned to the clicker2 condition was better than that on
items assigned to the attention condition, that possibility is
unlikely.
One limitation of the study is that the measure of students’
study emphasis was a self-report, which is less reliable than a
direct measure. Although future studies may examine that
variable using a different methodology, the narrow focus of
the present work was to control as much error as possible
in the sample and in the stimuli to determine whether clicker
questions enhance retention of targeted material. The present
design offers a rigorous test of that hypothesis. Also, it
would have been ideal to use the same items to test each
hypothesis and fully counterbalance them between the
clicker, control, and attention-grabbing conditions. It is a
limitation of the study that separate items were used to test
each hypothesis, thus preventing direct comparisons between
their respective items. The decision was made to create
separate stimulus sets for each hypothesis because there were
only two classes available for the study. As such, it was not
possible to fully counterbalance test items between all three
conditions (clicker, no clicker, and attention grabbing), and
the tight control attained through full counterbalancing was a
crucial methodological issue in this experiment. The current
design, however, still allowed the important comparisons
necessary to address Hypotheses 1 and 2. The only compari-
son that could not be made while simultaneously controlling
item differences was between the simple control and atten-
tion-grabbing items. Because that comparison would not
inform the aims of the study, it was seen as a reasonable
compromise. The differences between the attention and simple
control groups were, in fact, rather robust in the subject
analysis and in the predicted direction in both analyses
(70.1% vs 61.4% in the subject analysis and 68.7% vs 62.2%
in the item analysis, respectively), suggesting that the
attention-grabbing manipulation was indeed effective at
promoting attention and study of certain facts. The demon-
strated equivalence between questions used in the clicker1
and clicker2 conditions supports the validity of the differences
between the attention and simple control groups and thus the
validity of the attention-grabbing manipulation.
Another limitation was the narrow focus of the investiga-
tion necessitated by the within-subjects design, as no clicker-
free control condition could be included in the study.
Without a comparison group that used no clickers at all,
the present results cannot determine whether the benefits of
clickers also extended to some degree to the untargeted test
questions. It is certainly possible that untargeted question
performance was also boosted by clicker use, but just to a
lesser extent than the targeted questions. Indeed, some
studies have shown an effect of clicker use on untargeted
material (e.g. Mayer et al., 2009). Finally, the present study
examined only one aspect of learning, fact retention. It did
not examine the effect of clicker questions on the develop-
ment of conceptual understanding, problem solving, critical
thinking, or other aspects of learning. It will be important
for future studies to weigh the benefits of clickers in
those areas.
From the point of view of practice, the data offer encourag-
ing news to educators, particularly those teaching large groups
of students. The data suggest that although some attention
grabbing may contribute to the observed benefits of clickers,
the questions are also affecting real cognitive change in the
classroom, thus offering a real learning advantage to students.
With teacher investment of just a few minutes to incorporate
a clicker question into a presentation and a minute or so of
class time to present, class performance on delayed exam items
can be significantly and meaningfully increased. In the present
study, the clicker questions were associated with a perfor-
mance increase of roughly 10–13%, which seems to be a good
return on investment. The technology has its limits, as only so
many questions can reasonably be asked in a single class
meeting, but the evidence strongly suggests that clickers are
a profitable investment for teachers and students.
REFERENCES
Agarwal, P. K., Karpicke, J. D., Kang, S. K., Roediger, H. L., & McDermott, K. B. (2008). Examining the testing effect with open- and closed-book tests. Applied Cognitive Psychology, 22, 861–876. DOI: 10.1002/acp.1391
Allen, G. A., Mahler, W. A., & Estes, W. K. (1969). Effects of recall tests on long-term retention of paired associates. Journal of Verbal Learning & Verbal Behavior, 8(4), 463–470. DOI: 10.1016/S0022-5371(69)80090-3
Baker, F. (2001). The basics of item response theory. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation, University of Maryland.
Beekes, W. (2006). The "millionaire" method for encouraging participation. Active Learning in Higher Education: The Journal of the Institute for Learning and Teaching, 7, 25–36.
Bjork, R. A. (1975). Retrieval as a memory modifier: An interpretation of negative recency and related phenomena. In R. L. Solso (Ed.), Information processing and cognition: The Loyola symposium (pp. 123–144). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.
Blaxton, T. A. (1989). Investigating dissociations among memory measures: Support for a transfer appropriate processing framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 657–668. DOI: 10.1037/0278-7393.15.4.657
Brickman, P. (2006). The case of the druid Dracula: A directed "clicker" case study on DNA fingerprinting. Journal of College Science Teaching, 36(2), 48–53.
Butler, A. E., Karpicke, J. D., & Roediger, H. L. (2007). The effect of type and timing of feedback on learning from multiple-choice tests. Journal of Experimental Psychology: Applied, 13, 273–281. DOI: 10.1037/1076-898X.13.4.273
Carrier, M., & Pashler, H. (1992). The influence of retrieval on retention. Memory & Cognition, 20(6), 633–642.
Cleary, A. (2008). Using wireless response systems to replicate behavioral research findings in the classroom. Teaching of Psychology, 35, 42–44. DOI: 10.1080/00986280701826642
Draper, S., & Brown, M. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81–94. DOI: 10.1111/j.1365-2729.2004.00074.x
Duchastel, P. C. (1981). Retention of prose following testing with different types of tests. Contemporary Educational Psychology, 6, 217–226. DOI: 10.1016/0361-476X(81)90002-3
Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K. A., Hendel, R. A., Epstein, B. B., & Brosvic, G. M. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychological Record, 52(2), 187–201.
Glover, J. A. (1989). The "testing" phenomenon: Not gone but nearly forgotten. Journal of Educational Psychology, 81, 392–399. DOI: 10.1037/0022-0663.81.3.392
Hatch, J., Jensen, M., & Moore, R. (2005). Manna from heaven or "clickers"
Karpicke, J. D., & Roediger, H. L. (2007b). Expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 704–719. DOI: 10.1037/0278-7393.33.4.704
Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319, 966–968. DOI: 10.1126/science.1152408
Kennedy, G. E., & Cutts, Q. I. (2005). The association between students' use of an electronic voting system and their learning outcomes. Journal of Computer Assisted Learning, 21, 260–268. DOI: 10.1111/j.1365-2729.2005.00133.x
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100, 609–639. DOI: 10.1037/0033-295X.100.4.609
Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one's knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 187–194. DOI: 10.1037/0278-7393.31.2.187
Marsh, E. J., Agarwal, P. K., & Roediger, H. L. (2009). Memorial consequences of answering SAT II questions. Journal of Experimental Psychology: Applied, 15, 1–11. DOI: 10.1037/a0014721
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51–57. DOI: 10.1016/j.cedpsych.2008.04.002
McDaniel, M., Anderson, J., Derbish, M., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19, 494–513. DOI: 10.1080/09541440701326154
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. (2008). Efficacy of personal response systems ("clickers") in large, introductory psychology classes. Teaching of Psychology, 35, 45–50. DOI: 10.1080/00986280701818516
Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533. DOI: 10.1016/S0022-5371(77)80016-9
Nungester, R. J., & Duchastel, P. C. (1982). Testing versus review: Effects on retention. Journal of Educational Psychology, 74, 18–22. DOI: 10.1037/0022-0663.74.1.18
Pashler, H., Cepeda, N. J., Wixted, J. T., & Rohrer, D. (2005). When does feedback facilitate learning of words? Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(1), 3–8.
Poirier, C. R., & Feldman, R. S. (2007). Promoting active learning using individual response technology in large introductory psychology classes. Teaching of Psychology, 34(3), 194–196.
Raudenbush, S. W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2, 173–185. DOI: 10.1037/1082-989X.2.2.173
Ribbens, E. (2007). Why I like clicker personal response systems. Journal of College Science Teaching, 37(2), 60–62.
Roediger, H. L., & Karpicke, J. D. (2006a). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17, 249–255. DOI: 10.1111/j.1467-9280.2006.01693.x
Roediger, H. L., & Karpicke, J. D. (2006b). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210. DOI: 10.1111/j.1745-6916.2006.00012.x
Roediger, H., Agarwal, P., McDaniel, M., & McDermott, K. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382–395. DOI: 10.1037/a0026252
Sassenrath, J., & Garverick, C. (1965). Effects of differential feedback from examinations on retention and transfer. Journal of Educational Psychology, 56, 259–263.
Shapiro, A. M. (2009). An empirical study of personal response technology for improving attendance and learning in a large class. Journal of the Scholarship of Teaching and Learning, 9(1), 13–26.
Shih, M., Rogers, R., Hart, D., Phillis, R., & Lavoie, N. (2008). Community of practice: The use of personal response system technology in large lectures. Paper presented at the University of Massachusetts Conference on Information Technology, Boxborough, MA.
Stowell, J., & Nelson, J. (2007). Benefits of electronic audience response systems on student participation, learning, and emotion. Teaching of Psychology, 34, 253–258. DOI: 10.1080/00986280701700391
Stowell, J. R., Oldham, T., & Bennett, D. (2010). Using student response systems ("clickers") to combat conformity and shyness. Teaching of Psychology, 37, 135–140. DOI: 10.1080/00986281003626631
Szpunar, K. K., McDermott, K. B., & Roediger, H. L. (2008). Testing during study insulates against the buildup of proactive interference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1392–1399. DOI: 10.1037/a0013082
Trees, A., & Jackson, M. (2007). The learning environment in clicker classrooms: Students' processes of learning and involvement in large university-level courses using student response systems. Learning, Media and Technology, 32, 21–40. DOI: 10.1080/17439880601141179
Tulving, E. (1967). The effects of presentation and recall of material in free-recall verbal learning. Journal of Verbal Learning and Verbal Behavior, 6, 175–184. DOI: 10.1016/S0022-5371(67)80092-6
Van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York: Springer.
APPENDIX A
SAMPLE CLICKER–EXAM QUESTION PAIRS

Sample 1.
Clicker question: Which of the following is true about punishment?
A. Punishment is most effective if it always immediately follows the behavior.
B. Punishment works by reducing an undesired behavior.
C. Punishment can be ineffective if a big enough reward can be had by producing the behavior in question.
D. All of the above.
Exam question: Punishment is most effective if:
A. it immediately precedes the operant.
B. it consistently follows the operant.
C. it occasionally follows the operant.
D. there is considerable delay between the operant and the punishment.

Sample 2.
Clicker question: The major difference between a primary and secondary reinforcer is that primary reinforcers are naturally satisfying while a secondary reinforcer
A. is something we learn to like.
B. is usually an indirect form of a primary reinforcer.
C. Both A and B.
D. None of the above.
Step 1 - Book Search
1. [Instructions] Using Woodbury Library Catalog
(library.woodbury.edu), search for the book assigned to you.
Above you will see a number to the left of your name, locate
that number on the Excel spreadsheet in Moodle to find your
assigned book. Once you have found your book in the Catalog,
select the dropdown menu from “Libraries to search” and select
Woodbury University Library. Once you have located (found)
the book in the Catalog, click on the title of the book.
2. Name of the book (Once upon a car by Vlasic).
3. Please answer the following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of Pages:
f. Place of publication (city/state):
g. What is the OCLC Number:
h. How many (what is the number of) related Subject Words that
have been applied to this item (you can find the number of
subject words at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5
.K55 A618 2012):
4. Create a proper APA citation:
5. Go to the shelf and locate your assigned book.
a. Take a photo of the front cover of the book
b. Take a photo of the table of contents
c. NOTE: Attach both images to this document (NO images
from the Internet are allowed!). Shrink the images and place
both images under HERE (DO NOT PUT THEM AT THE END)
Step 2 - Library of Congress Classification
1. [Instructions] Using Google, search for: Library of Congress
Classification outline. The first result will be the link you will
want to select. Click on the link and you will be redirected to
the Library of Congress Classification Outline webpage.
2. Using the call number of your book (which you have
identified in Step 1, 2.k) answer the following:
a. Name of the Class (example K = Law):
b. Name of the Subclass (KZ = Law of nations):
3. Under the Subclass:
a. What is the call number range for your book (Example:
KZ170-173):
b. What is that section or call number range called (Example:
Annuals):
c. Does this properly describe your book? (Make sure you look
at the book on the shelf in the library to answer this question):
i. Yes/No:
ii. Why?:
Step 3 – Searching Your Topic
1. What is your Major (example: History):
2. Write out two search words using the word “AND” to find a
book in your major (example: economy and students):
3. Using the Woodbury Library catalog, enter your two search
words (using the word AND) and run a search.
a. How many results did you retrieve from Libraries Worldwide:
b. How many results did you retrieve from Woodbury
University Libraries:
c. How many results did you retrieve from Burbank (if you have
zero results, change your search words to find a result):
4. Follow the instructions carefully: 1st: On the left side of the
screen, click on the link called “Print Book” under Format, 2nd:
Select the second title listed in the results. Now answer the
following questions:
a. What is the full title of the book you found in the Catalog:
b. Names of author(s)/editor(s):
c. Publisher:
d. Copyright (year published):
e. Number of Pages:
f. Place of publication (city/state):
g. What is the OCLC Number:
h. How many (what is the number of) related Subject Words that
have been applied to this item (you can find the number of
subject words at the bottom of the record):
i. What is the Location of this book:
j. What is the Status of the book:
k. What is the full Call Number of the book (example: ND511.5
.K55 A618 2012):
5. Create a proper APA citation:
6. Go to the shelf and locate your assigned book.
a. Take a photo of the front cover of the book
b. Take a photo of the table of contents
c. NOTE: Attach both images to this document. Shrink the
images so they can fit on one page. NO images from the Internet
are allowed!
Updated: Tuesday, October 04, 2016