CAA 2018 Presentation, included in "Prove It! Publish It! Art History and the Scholarship of Teaching and Learning," session sponsored by Art Historians Interested in Pedagogy and Technology http://bit.ly/2uKheWK
AHTR SoTL Resources: Kelly Donahue-Wallace, Prove It! Publish It! SoTL Case Studies
1. Prove It! Publish It!
Case Studies
Kelly Donahue-Wallace
University of North Texas
2. Two Case Studies in One
“A Case Study in Integrating the Best Practices of Face-to-Face Art
History and Online Teaching”
Dr. Kelly Donahue-Wallace and Dr. Jacqueline Chanda
Originally published in Interactive Multimedia Electronic Journal of
Computer-Enhanced Learning 7:1 (2005).
Reprinted in Formamente: Rivista Internazionale di Ricerca sul Futuro
Digitale (Università Telematica Guglielmo Marconi, Italy) 1:1-2 (2006),
pp. 95-106.
(posted to https://unt.academia.edu/KellyDonahueWallace)
3. Common Assertions in Teaching Presentations
• The students learned/retained more/better.
• The students were more engaged or got a lot out of the experience.
5. Case Study #1
Start with relevant educational theory:
Nelson, R. (2000). "The Slide Lecture, or The Work of Art History in the
Age of Mechanical Reproduction." Critical Inquiry 26, 414-434.
6. Case Study #1
• Design the study:
• Studying the effectiveness of three instructional models:
• Lecture
• Blended: face-to-face lecture plus reinforcement with an online learning object
• Online only: online text plus reinforcement with an online learning object
7. Case Study #1
• Research question:
• Which instructional model helps students learn the
content better?
• Correctness of answers
• Quality of answers using appropriate terminology
8. Case Study #1
• Develop the research method
• Experimental Data Collection
• Exposing participants to different teaching methods
• Three groups: lecture (control), blended, fully online
• Answered three short essay questions face-to-face
(not online)
• Quantitative analysis=how many answered correctly
• Content Analysis
• Assessing their learning through content analysis of
answers to questions.
• Quantitative analysis=how many relevant terms they
employed
9. Case Study #1
• Answering constructed-response questions:
• Can they correctly identify what a mihrab niche is?
• Can they correctly explain the typical parts of a mosque?
• Can they correctly explain the role of a minaret?
10. Case Study #1
• What did we do?
• We scored the answers overall as either
correct or incorrect.
• Correct= correctly identifying the object,
parts of the plan, function.
11. Case Study #1: Correctness of Answers Results
Questions          Face-to-Face Group   Face-to-Face/Online         Totally Online Group
                                        Learning Object Group
Mihrab Niche       83%                  83%                         100%
Plan of Mosque     83%                  100%                        83%
Role of Minaret    66%                  100%                        100%
Overall            77%                  94%                         94%
12. Case Study #1
• Content Analysis to Assess Quality of Answers
• Looking at use of terms.
• For each answer, identified 3 relevant terms that a
quality answer would include.
• Created coding scheme for the data = identified
relevant terms and acceptable alternatives.
• Examples of codes:
• Mihrab, niche in qibla wall
• Qibla wall, wall facing Mecca
• Plan, map, floorplan
• Tower, spire, elevated structure
13. Case Study #1
• Content Analysis of Data to Assess Quality
• What did we do?
• Counted codes (terms) in answers.
• Tallied frequency.
• Did not count other relevant terms since they were not in
our coding scheme.
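The counting procedure described on this slide can be sketched in a few lines of Python. The coding scheme and sample answer below are hypothetical illustrations loosely based on the example codes on slide 12, not the study's actual scheme; each code counts at most once per answer, and terms outside the scheme are ignored.

```python
# Sketch of the content-analysis term count: each code maps to its
# accepted alternative phrasings; a code scores at most once per answer,
# and words outside the scheme are simply not counted.
# (Hypothetical illustration, not the study's actual coding scheme.)

CODING_SCHEME = {
    "mihrab": ["mihrab", "niche in qibla wall"],
    "qibla wall": ["qibla wall", "wall facing mecca"],
    "minaret": ["tower", "spire", "elevated structure"],
}

def score_answer(answer: str) -> int:
    """Count how many coded concepts a single answer uses (max 1 per code)."""
    text = answer.lower()
    return sum(
        any(alternative in text for alternative in alternatives)
        for alternatives in CODING_SCHEME.values()
    )

answer = "The tower, or spire, stands beside the wall facing Mecca."
print(score_answer(answer))  # "tower"/"spire" count once; "wall facing mecca" counts once -> 2
```

A real scheme would also need agreed rules for stemming and near-synonyms, which is exactly why a written coding scheme matters when more than one researcher codes the data.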
14. Case Study #1: Quality of Answers Results
Questions                  Face-to-Face Group   Face-to-Face/Online Group   Totally Online Group
Mihrab Niche               11                   6                           13
Mosque Plan                12                   18                          13
Role of Minaret            9                    15                          12
Percent of possible terms  59%                  72%                         70%
16. Case Study #2: Perceptions
• Assessing student perception of interactive
learning objects in an online art appreciation
course.
• Streaming videos demonstrating art-making
processes
• Flash animations
• Types:
• Self-directed creation of objects
• Drag-and-drop self-assessments
• Game-like self-assessments
17. Case Study #2: Perceptions
• Research Questions
• Which types of interactive learning objects did
online students like?
• How did online students perceive the impact of
interactive learning objects on their learning?
18. Case Study #2: Perceptions
• Research Method
• Questionnaire=qualitative and quantitative
• Open for 5 days online
• Voluntary participation
• 46 of 167 students who logged in during the 5 days
participated
19. Case Study #2: Perceptions
• Questionnaire
• Did they prefer video or Flash learning objects?
• Multiple choice
• Which types of Flash learning objects did they
like?
• Multiple choice
• Did they believe the learning objects impacted
their learning?
• Multiple choice
• How did they feel about the learning objects?
• Constructed, open-ended question
20. Case Study #2: Perceptions
• Results (Quantitative)
• Response to whether they believed that the
learning objects contributed to their learning
experience: 70% yes
• Response to which type of learning object
students preferred: 51% videos, 45% Flash
animations
• Response to which type of Flash learning object
students preferred: majority preferred the
self-directed creative exercise.
21. Case Study #2: Perceptions
• Results (Qualitative)
• Responses to open-ended question of how
students felt about learning objects:
• Unexpected recognition of how the interactive
components improved learning because they catered
to multiple types of learners.
• Positive responses to being allowed to self-direct.
• Positive responses to instructor’s effort to include
these in the course.
22. Case Study #1 and #2
• Discussion and implications/conclusions:
• What do the results show? What conclusions can be drawn?
• Students do learn online. (#1)
• Students learn a bit more when face-to-face and online instruction are combined (blended). (#1)
• Students perceive interactive learning objects to be effective in their learning. (#2)
• Students appreciate the presence of learning objects as a sign of the instructor’s interest in their
learning. (#2)
• How can this study impact the broader field?
• Recommendations for actions/changes.
• Further research recommendations.
23. Doing Art History Pedagogy Research
• It’s not that hard.
• There are people who can help you.
• It helps the field develop and grow.
• And you can get published in journals like Art History Pedagogy and
Practice.
Editor's Notes
Thank you all for joining us and thanks to AHPT for hosting this session.
My presentation follows on Sara’s by offering step-by-step descriptions of two case studies of art history pedagogy.
And while I probably should have used works by other researchers, I chose to focus on two studies I completed with my colleague, the art educator Dr. Jacqueline Chanda, as a single project presented at the EdMedia conference. The paper was one of five selected from among the conference’s several hundred for publication in the now-defunct Interactive Multimedia Electronic Journal of Computer-Enhanced Learning and then reprinted in FormaMente, the official journal of the Association of Global Universities in Distance Education.
I chose to focus on our studies both because they received such a great response from the Scholarship of Teaching and Learning community and because they are the studies I know best!
And it is an easy model to follow!
Since CAA began allowing pedagogy sessions over a decade ago, I’ve been to many of them.
The presenters offer interesting examples of their work with students and I always go home with lots of new ideas and things to try in my classroom.
Most of these presenters made at least one of the two most common claims I’ve heard at these sessions over the years:
The innovation the instructor tried helped the students to learn better or retain more information than traditional lecture models.
The students were more engaged or got a lot out of the experience compared to the normal lecture.
The two case studies I will present offer ways to test these assertions.
At the beginning of online education, there were lots of questions of whether students learned at all in the online environment.
I had just written an online class, so I was curious to know the answer.
So, my co-author and I started with the simple question of did they learn.
Like good scholars, we consulted literature in the field. We found very little written about art history pedagogy at all, and none for online learning at that point. But we read what was available.
We then, as the Scholarship of Teaching and Learning recommends, looked for a relevant educational theory to shape our study. Art history is not robust in this area, and we didn’t want to use a more generic theory because we agreed that there is something different about teaching with images and about images. And we were interested in capturing whether those unique qualities of art history could translate into other platforms.
So we seized on Robert Nelson’s analysis of art history pedagogy, which relies on what he calls the “performative triangle consisting of speaker, audience, and image.” Nelson concluded that this relationship is what teaches: as the instructor looks at and talks about the images, he or she repeatedly models accepted interpretive strategies for the students, who learn to mimic them over the course of the semester.
In other words, effective teaching in art history relies on the instructor’s presence.
So, what happens when the instructor is removed and replaced with interactive learning objects and text on a screen?
Began to design our study:
Looked at the effectiveness of three instructional models:
Lecture only
Blended—lecture followed by a reinforcing interactive learning object I developed
Online—Online text and reinforcing learning object
Honed the Research question:
Which instructional model helps students learn the content better?
Correctness of answers
Quality of answers using appropriate terminology—could they talk about the works in the language of the discipline
To test the three instructional models, we designed our data collection method.
Opted for an experimental approach
Three groups, each consisting of 6 art history majors with no familiarity with Islamic architecture. Unfortunately, at that time our survey class was 100% Western, which has since changed, of course.
This part of the study is very simple and, it must be said, “unscientific”. That is, it was a small pilot study with a small sample. With such a small sample of participants, we could not be sure that other researchers repeating this study would get the same results—so a much larger study is warranted. But this is not uncommon in SoTL.
So, we exposed the three groups to three different teaching methods: lecture, blended, and online.
At the end, students took a short essay test on paper. And we counted how many questions they answered correctly.
Then my colleague performed a content analysis of their answers. Looked at the words they used to answer the questions and counted how many relevant terms were employed.
It met the scientific research requirement of replicability: that is, the procedures (methodology) employed in the study were reported in sufficient detail that a second researcher could repeat it.
But the small sample put its reliability and validity into question. Reliability refers to whether someone repeating the measurements under the same conditions with a different group would get the same results. Probably not.
Validity refers to the integrity of conclusions that are generated through a research study. Based on the conditions of our study we were pretty sure that X was the cause of Y. That is, any knowledge about Islamic architecture was based on what we taught the study participants.
We could not be confident, however, that we could extrapolate our findings beyond the research context. This last part involves statistical analysis, so our small sample would not hold up. Once you get into bigger groups and want to extrapolate that all art history majors nationwide would have the same results, you need to involve a statistician. But, fortunately, the Scholarship of Teaching and Learning does not demand this the way, say, a political science research study would.
Conducted the experiment one afternoon, using three rooms and two graduate assistants to monitor, making sure online students did not consult anything other than what we wanted them to see.
I delivered the lecture to the lecture only group while the online-only group read a text with the same information and reinforced this with an interactive animation. While these two groups took the three-question essay test, I delivered the same lecture to the blended group. They then viewed the interactive animation, then took the essay test.
The whole thing probably took 2 hours.
Then we looked at how they answered the questions: did they correctly identify and explain the material they’d been taught in these different ways?
As researcher, I counted their answers.
Results:
4 to 5 of the 6 face-to-face students answered each question correctly
5 to 6 of the lecture-plus-learning-object group answered correctly
5 to 6 of the online-plus-learning-object group answered correctly
Here you can see why this is an unscientific study: ideally you have a much larger group, and we may have just had one less talented student in the face-to-face group.
But the bigger point was made: doing things online didn’t hurt; in fact, having some online presence, whether text or an interactive learning object (or both), helped.
Then my partner did her content analysis of the answers the students wrote looking at their use of terms.
She identified 3 relevant terms that each answer should include.
But since there are several correct ways to describe things, she employed a coding scheme: which words would be accepted as correct and still count as evidence of a high-quality answer.
Coding schemes are especially important when more than one person is analyzing the data, so that researchers interpret the answers in the same way.
Content Analysis of Data to Assess Quality
What did we do?—completed by my co-author
Counted codes/terms in answers.
Tallied frequency.
Did not count other relevant terms since they were not in our coding scheme.
Did not count same code twice
*Raw numbers refer to the number of relevant terms in the answers to each question overall in each group. Each group had 6 participants, and there were 3 relevant terms we looked for, so the best possible score in each cell is 18.
+Percentage refers to the percent of those possible terms actually used.
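The arithmetic behind those percentages can be reproduced directly from the raw counts in the quality-of-answers table: each group could use at most 3 questions × 6 participants × 3 terms = 54 relevant terms.

```python
# Reproduce the quality-of-answers percentages from the raw term counts.
# Maximum possible per group: 3 questions x 6 participants x 3 terms = 54.
raw_counts = {
    "face-to-face": [11, 12, 9],
    "blended": [6, 18, 15],
    "totally online": [13, 13, 12],
}
POSSIBLE = 3 * 6 * 3  # 54

for group, counts in raw_counts.items():
    pct = round(100 * sum(counts) / POSSIBLE)
    print(f"{group}: {sum(counts)}/{POSSIBLE} terms = {pct}%")
# face-to-face: 32/54 terms = 59%
# blended: 39/54 terms = 72%
# totally online: 38/54 terms = 70%
```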
Again, this small study may or may not have the same results with a bigger sample, but the quality of the answers corresponded to the data on the correctness of the answer: being online did not hurt and having an interactive learning object to reinforce concepts helped students learn.
So, we were able to answer our research question that, yes, students do learn online. Which teaching method showed better results? Lecture was mediocre. Students performed best with face-to-face lecture plus an online learning object and with online text plus learning object.
The second case study is the second part of the same project:
It addresses that second assertion from pedagogy conference sessions: they get so much out of it. Or they are so engaged.
So, we asked the students.
We stuck with the same theoretical model—Nelson’s performative triangle of instructor, image, and audience.
How did students feel about learning when the professor was absent in a fully online class and replaced with interactive learning objects?
Used my online art appreciation class, which had two types of learning objects: streaming videos with artists from my institution demonstrating media and processes, and Flash animations of several types.
Research Questions
Now that we have determined that students learn from the learning objects, which types of interactive learning objects did online students like?
How did online students perceive the impact of interactive learning objects on their learning?
Research Method
Questionnaire=qualitative and quantitative
Open for 5 days in the online class—integrated into the learning management system
Voluntary participation
46 of 167 students who logged in during the 5 days participated
Questionnaire
Did they prefer video or Flash learning objects?
--Multiple choice question
Which types of Flash learning objects did they like?
--Multiple choice question
Did they believe the learning objects impacted their learning?
--Multiple choice question
How did they feel about the learning objects?
--Constructed, open-ended question
Quantitative part of our results
70% of students believed that the learning objects contributed to their learning experience.
51% preferred the videos, 45% the Flash animations
Majority preferred the self-directed creative exercises rather than the self-assessments.
Qualitative Results
Responses to open-ended question of how students felt about learning objects:
Unexpected recognition of how the interactive components improved learning because they catered to multiple types of learners.
Positive responses overall to being allowed to self-direct—to choose and make decisions
Positive responses to instructor’s effort to include these in the course.
Like all SoTL research, ours ended with a discussion of conclusions and implications for the field and future research.