Running head: SAMPLE PAPER 1
A Sample Paper for the Purpose of Correct Formatting
Student Name
Liberty University
Per the Publication Manual of the American Psychological
Association (APA; 6th edition), double-space the
entire paper (p. 229), except with charts or tables. Do not add
any extra spacing. Use Times New Roman,
12-point font. Do not use bold except for headings as necessary
(see page 62 of your APA manual).
Margins are set for 1" on top, bottom, and sides. All page
references will be to the APA manual, 6th edition.
Add two spaces after punctuation at the end of each sentence,
except in the reference list, for the sake of
readability (pp. 87-88). The header on the cover page is
different from the headers on the rest of the paper.
Only the cover page header includes the words Running head
(without the italics; p. 41). The header is flush
left but the page numbers are flush right (see bottom of p. 229).
Make sure the header font is the same as the
rest of the paper. Handouts on how to format the cover page (as
well as other handouts) are available on the
Online Writing Center’s webpage:
http://www.liberty.edu/index.cfm?PID=17176, and a superb
YouTube
video demonstration that provides visualized step-by-step
instructions for setting a paper up in proper APA
format is available at
https://www.youtube.com/watch?v=KUjhwGmhDrI
Note: Comments inside boxes are not part of the formatting of
the paper. Section or page number references
to the APA manual are denoted in parentheses throughout.
Most citations within the body of this paper are
fictional, for instructional purposes only, but are also included
in the reference list to illustrate how citations in the body of the paper correlate with resources in the reference list.
Note: Center the following information in the top half of the page: title, your name, and school name (2.01, p. 23; see also p. 41). Some professors require the course title and section, the instructor’s name, and the date; add those on the lines beneath the required title page information. Do not use contractions in formal papers—in either the title or the body of the paper (i.e., use “do not” rather than “don’t”). Titles should include no more than 12 words. Titles use upper and lowercase letters (i.e., “title case”; 2.01, p. 23; see also 4.15 on pp. 101-102).
Prepared by Christy Owen, Brian Aunkst, and Dr. Carmella
O’Hare. Last updated June 28, 2016.
Abstract
Begin your abstract at the left margin (2.04 on p. 27; see also p.
229). This is the only paragraph
that should not be indented. Unless otherwise instructed, APA
recommends an abstract be
between 150–250 words (p. 27). It should not contain any
citations or direct quotes. This should
be a tight, concise summary of the main points in your paper,
not a step-by-step of what you plan
to accomplish in your paper. Avoid phrases such as “this paper
will,” and just structure your
sentences to say what you want to say. The following three
sentences exemplify a good abstract
style: There are many similarities and differences between the
codes of ethics for the ACA and
the AACC. Both include similar mandates in the areas of ----, -
--, and ---. However, each differs
significantly in the areas of ---, ---, and ---. For more detailed
information, see “Writing an
Abstract” at
http://www.liberty.edu/academics/graduate/writing/?PID=12268
This is just now at
168 words, so take a moment to eyeball how brief your abstract
must be. Think of your paper as
a movie, and the abstract as the summary of the plot that you
would share to draw people’s
interest into wanting to come and see your movie. Same thing:
you want to really hook and
intrigue them. What you have to say is important! Still only at
221 words here; remember to try
to stay under 250, unless your professor advises otherwise. The
keywords noted below highlight
the search terms someone would use to find your paper in a
database; they should be formatted
as shown (indented ½”, with the word “Keywords” in italics, and the key words themselves in normal print, separated by commas).
Keywords: main words, primary, necessary, search terms
A Sample Paper for the Purpose of Correct Formatting
The title of your paper goes on the top line of the first page of
the body. It should be
centered, unbolded, and in title case (all major words—usually those with four or more letters—should begin with a capital letter); see Figure 2.1 on p. 42 and 4.15 on pp. 101-102. You can either
give a brief introductory paragraph below that or go straight
into a Level 1 heading. In APA
format, the Introduction never has a heading (simply begin with
an introductory paragraph
without the word "Introduction"); see first paragraph of section
2.05 on page 27, as well as the
first sentence under the bolded headings on page 63 of your
APA manual (American
Psychological Association [APA], 2010). As shown in the
previous sentence, use brackets to
denote an abbreviation within parentheses (third bullet under
4.10). Write out acronyms the first
time mentioned, such as American Psychological Association
for APA, and then use the
acronym throughout the body of the paper (4.22; note the
section on underuse, however, at the
top of p. 107).
Basic Rules of Scholarly Writing
Most beginning students have difficulty learning how to write and format papers correctly using the sixth edition of the APA manual
(APA, 2010). However, the Liberty
University Online Writing Center’s mission includes helping
students learn how to be
autonomous, proficient writers, and thus this sample paper is
designed so it cannot be used as a
template for inserting the correct parts. For the purpose of
instruction, this paper will use second
person (you, your), but third person (this author) must be used
in most student papers. First
person (I, me, we, us, our) is not generally permitted in
scholarly papers. Students should refrain
from using first or second person in academic courses (even
though the APA manual appears to
encourage this in other writing venues) unless the assignment
instructions clearly permit such (as
in the case of personal reflection sections or life histories).
Though some written assignments
will not require an abstract, understand that APA generally
requires one unless otherwise stated
in your assignment instructions or grading rubric.
Heading Levels—Level 1
This sample paper uses primarily one level of headings (Level
1), so each heading
presented herein is centered and in boldface. APA style,
however, has five heading levels, which
will be demonstrated briefly for visual purposes. See page 62
of your APA manual (APA, 2010)
if employing more than one level. Level 1 headings are bolded
and in title case — capitalize
each major word (usually those with four or more letters),
including hyphenated compound
words. Four-Year Pilot Study on Attachment Disorders and Self-Awareness of Pollen are
examples of headings with compound words. Do not capitalize
articles (a, an, the) in headings
unless they begin a title or follow a colon.
Level 2 Heading
Level 2 headings are bolded, in title case, and left-justified.
The supporting information
is posed in standard paragraph form beneath it. Never use only
one of any level of heading. You
must use two or more of any level you use, though not every
paper will require more than one
level.
Level 3 heading. This level is bolded, indented ½”, in sentence case (only the first word should begin with a capital letter in most cases), and ends with a
period. Add two spaces, then begin
typing your content on the same line, as presented in this
paragraph.
Level 4 heading. Same as Level 3, except italicized, too.
Level 5 heading. Same as Level 4, but unbolded. Despite
heavy writing experience, this
author has never used Level 5 headings.
Annotated Bibliographies, Tables of Contents, and Outlines
A few requirements in various assignments are not addressed in
the APA manual, such as
outlines, tables of contents, and annotated bibliographies. APA
does not regulate every type of
paper, including those forms. In those cases, follow your
professor’s instructions and the
grading rubric for the content and format of the outline or
annotations, and use standard APA
formatting for all other elements (such as running head, title
page, body, reference list, 1"
margins, double-spacing, Times New Roman 12-point font,
etc.).
That being said, when I organize outlines in APA format, I set
my headings up in the
proper levels (making sure there are at least two subheadings
under each level), and then I use
those to make the entries in the outline. Level 1 headings
become Roman numerals (I, II, III),
Level 2 headings become capital letters (A, B, C), Level 3
headings become numbers (1, 2, 3),
and Level 4 headings become lowercase letters (a, b, c). Some
courses require “working
outlines,” which are designed to have the bones and
foundational framework of the paper in
place (such as title page, abstract, body with title and headings,
and references), without all the
supporting “meat” that fills out and forms a completed paper.
Appendices
Appendices, if any, are attached after the reference list (see top
of p. 230). You must
refer to them in the body of your paper so that your reader
knows to look there (see top of p. 39).
The word “Appendix” is singular; use it to refer to individual
appendices. I am attaching a
sample Annotated Bibliography as a visual aid in “Appendix
A.” You will see that I included
the title “Appendix A” at the top of the page and formatted it in
standard APA format beneath
that.
Crediting Your Sources
Paraphrasing is rephrasing another’s idea in one’s own words.
Quoting is using another’s
exact words. Both need to be cited; failure to do so constitutes
plagiarism. Liberty University
also has a strict policy against a student using the same paper
(or portions thereof) in more than
one class or assignment, which it deems “self-plagiarism.”
Students who want to cite their own
prior work must cite and reference it just like any other source;
see example in Owen (2012).
Include the author(s) and year for paraphrases and the author(s),
year, and page or paragraph
number for direct quotes. Page numbers should be used for any
printed material (books, articles,
etc.), and paragraph numbers should be used in the absence of
page numbers (online articles,
webpages, etc.; 6.05, pp. 171-172). Use p. for one page and pp.
(not italicized in your paper) for
more than one. Use para. for one paragraph and paras. (also not
italicized in your paper) for two
or more. For example: (Perigogn & Brazel, 2012, pp. 12–13) or
(Liberty University, 2015, para. 8).
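These locator rules are mechanical enough to express in a few lines of code; here is a minimal, purely illustrative Python sketch (the function name and inputs are ours, not APA's):

    def locator(pages=None, paragraphs=None):
        # Pages take precedence (printed material); paragraph numbers are
        # used only when no page numbers exist (6.05, pp. 171-172).
        if pages:
            prefix = "p." if len(pages) == 1 else "pp."
            return prefix + " " + "-".join(str(p) for p in pages)
        if paragraphs:
            prefix = "para." if len(paragraphs) == 1 else "paras."
            return prefix + " " + "-".join(str(n) for n in paragraphs)
        return ""

    print(locator(pages=[12, 13]))   # pp. 12-13
    print(locator(paragraphs=[8]))   # para. 8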
Section 6.04 of the APA (2010) manual says, “When
paraphrasing or referring to an idea
contained in another work, you are encouraged to provide a
page or paragraph number,
especially when it would help an interested reader locate the
relevant passage in a long or
complex text” (p. 171). When naming authors in the text of the
sentence itself (called a narrative
citation), use the word “and” to connect them. For example,
“Allen, Bacon, and Paul (2011)
contemplated that . . .” Use an ampersand (&) in place of the
word “and” in parenthetical
citations and reference lists: (Allen, Bacon, & Paul, 2011).
APA’s (2010) official rule is that you must cite your source
every single time you refer to
material you gleaned from it (pp. 15-16). You can vary your
sentence structure to include both
narrative and parenthetical citations in order to avoid
redundancy. There is, however, an
unofficial trend amongst some professors who require their
students to cite their sources only
once per paragraph (the first time you refer to it, not merely at
the end of the paragraph, which
can be interpreted as an afterthought), despite this being in
conflict with standard APA
formatting. You will want to clarify which your professor
prefers; if in doubt, cite every time.
That being said, APA (2010) has a special rule that excludes the
year of publication in
narrative in-text citations (when you name the authors in the
text of the sentence itself), after the
first citation in each paragraph ... provided that first citation is
narrative (and not parenthetical).
It should continue to appear in all parenthetical citations (see
sections 6.11 and 6.12, pp. 174-
175). If the first citation in the paragraph is parenthetical, then
ALL citations must include the
year. The two examples in 6.11 on pp. 174-175 are subtle, but if
you look carefully, you will be
able to discern this for yourself.
If the material you cited was referred to in multiple resources,
separate different sets of
authors with semicolons, arranged in the order they appear
(alphabetically by the first author’s
last name) in the reference list (Carlisle, n.d.; Prayer, 2015).
Periods are placed after the closing
parenthesis, except with indented (blocked) quotes. Quotes that
are 40 or more words must be
blocked, with the left margin of the entire quote indented ½
inch. Maintain double-spacing of
block quotes. APA prefers that you introduce quotes, but note
that the punctuation falls at the
end of the direct quote, with the page number outside of that
(which is contrary to punctuation
for non-blocked quotes). For example, Alone (2008) claims
(note that there are no quotation
marks for block quotes, as shown below):
Half of a peanut butter sandwich contains as much bacteria as
the wisp of the planet
Mars. Thus, practicality requires that Mrs. Spotiker nibble one
bit at a time until she is
assured that she will not perish from ingesting it too quickly.
(p. 13)
Usually quotes within quotes use single quotation marks, but
use double quotation marks for
quotes within blocked quotes, since there are no other quotation
marks included within. Also
understand that direct quotes should be used sparingly in
scholarly writing; paraphrasing is much
preferred in APA format. Only use quotes when changing the
wording would change the
original author’s meaning. You cannot simply change one word
and omit a second; if you
paraphrase, the wording must be substantially different, but
with the same meaning. Regardless,
you would need to cite the resource you took this information
from.
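The 40-word threshold for block quotes is easy to check mechanically; a tiny, hypothetical Python helper (counting words naively by whitespace):

    def must_block(quote):
        # Quotes of 40 or more words must be set as an indented block
        # quote without quotation marks; shorter quotes stay inline.
        return len(quote.split()) >= 40

    quote = "Half of a peanut butter sandwich contains as much bacteria..."
    print("block" if must_block(quote) else "inline")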
Authors with more than one work published in the same year are
distinguished by lower-
case letters after the years, beginning with a. For example,
Double (2008a) and Double (2008b)
would refer to resources by the same author published in 2008.
If there are two different authors
with the same last name but different first names who published
in the same year, include the
first initials: Brown, J. (2009) and Brown, M. (2009).
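The lettering rule can even be automated when you know each author's same-year titles; a minimal Python sketch under that assumption (the entries are invented):

    from collections import defaultdict

    def add_year_letters(entries):
        # entries: list of (author, year, title) tuples. When one author
        # has several works in the same year, sort those works by title
        # and append a, b, c ... to the year.
        groups = defaultdict(list)
        for author, year, title in entries:
            groups[(author, year)].append(title)
        labels = {}
        for (author, year), titles in groups.items():
            if len(titles) == 1:
                labels[(author, year, titles[0])] = str(year)
            else:
                for i, title in enumerate(sorted(titles)):
                    labels[(author, year, title)] = str(year) + chr(ord("a") + i)
        return labels

    entries = [("Double, C.", 2008, "This is the second"),
               ("Double, C.", 2008, "This is arranged alphabetically")]
    print(add_year_letters(entries))  # yields 2008a and 2008b, by title order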
The names of journals, books, plays, and other long works, if
mentioned in the body of
the paper, are italicized in title case (4.21). Titles of articles,
lectures, poems, chapters, website
articles, and songs should be in title case, encapsulated by
quotation marks (4.07). The year of
publication should always follow the author’s name(s), whether
in narrative or parenthetical
format: Perigogn and Brazel (2012) anticipated, or (Perigogn &
Brazel, 2012). The page or
paragraph number must follow the direct quote. Second
(2015) asserted that “paper planes
can fly to the moon” (p. 13). You can restate that with a
parenthetical citation as: “Paper planes
can fly to the moon” (Second, 2015, p. 13).
Citations in the body of the paper should include only the last
names, unless you have
two or more resources authored by individuals with the same
last name in the same year, such as
Brown, J. (2009) and Brown, M. (2009) mentioned above.
Numbers one through nine must be
written out in word format, with some exceptions (such as
ages—see section 4.32 on page 112 of
your APA manual). Numbers 10 and up must be expressed in numeral format: 4.31(a).
Always write out in word format any number that begins a
sentence: 4.32(a).
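A hypothetical Python sketch of this basic numbers rule (the named exceptions, such as ages, would need extra handling):

    NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                    "five", "six", "seven", "eight", "nine"]

    def format_number(n, starts_sentence=False):
        # One through nine are written as words; 10 and up appear as
        # numerals (4.31a). Any number that begins a sentence must be
        # written out in words (4.32a); this sketch only spells 0-9,
        # so reword larger sentence-initial numbers by hand.
        if 0 <= n <= 9:
            return NUMBER_WORDS[n]
        if starts_sentence:
            raise ValueError("spell out sentence-initial numbers in words")
        return str(n)

    print(format_number(4))    # four
    print(format_number(12))   # 12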
Three or More Authors
When referring to material that comes from three to five
authors, include all of the
authors’ last names in the first reference. Subsequently, use just
the first author’s last name
followed by the words et al. (without italics). Et al. is a Latin
abbreviation for et alii, meaning
“and others,” which is why the word “al.” has a period, whereas
“et” does not. Alone, Other, and
Other (2011) stipulated that peacocks strut. The second time I
refer to their material, I would
apply APA’s rule (Alone et al., 2011).
When a work has six or more authors, cite only the last name of
the first author in the
body of the paper, followed by et al., as if you had already cited
all of the authors previously
(Acworth et al., 2011). Note that I had not cited the Acworth et
al. (2011) resource previously in
this paper. For seven or fewer authors in the references, write
out all of the authors’ last names
with first- and middle initials, up to and including the seventh
author. APA has a special rule for
resources with eight or more authors: Write out the first six
authors’ last names with initials,
insert an ellipsis (…) in place of the ampersand (&), and finish
it with the last name and initials
of the last author. See the examples provided in the chart on
page 177 (APA, 2010), as well as
this paper’s reference list for visuals of these variances
(Acworth et al., 2011; Harold et al.,
2014).
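Because these rules are purely mechanical, they can also be sketched in code; the helper names and author lists below are invented for illustration only:

    def narrative_citation(authors, year, first_mention=True):
        # authors: surnames in reference-list order.
        if len(authors) == 1:
            return f"{authors[0]} ({year})"
        if len(authors) == 2:
            # Two authors are always both named.
            return f"{authors[0]} and {authors[1]} ({year})"
        if len(authors) >= 6 or not first_mention:
            # Six or more: et al. even on first mention.
            # Three to five: et al. only after the first mention.
            return f"{authors[0]} et al. ({year})"
        return f"{', '.join(authors[:-1])}, and {authors[-1]} ({year})"

    def reference_authors(authors):
        # Up to seven authors are all listed; with eight or more, list
        # the first six, an ellipsis, then the final author.
        if len(authors) == 1:
            return authors[0]
        if len(authors) <= 7:
            return ", ".join(authors[:-1]) + ", & " + authors[-1]
        return ", ".join(authors[:6]) + ", … " + authors[-1]

    print(narrative_citation(["Alone", "Other", "Other"], 2011, True))
    # Alone, Other, and Other (2011)
    print(narrative_citation(["Alone", "Other", "Other"], 2011, False))
    # Alone et al. (2011)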
Primary Sources versus Secondary Sources
APA strongly advocates against using secondary sources; rather,
it favors you finding
and citing the original (primary) resource whenever possible
(6.17, p. 178). On the rare occasion
that you do find it necessary to cite from a secondary source,
both the primary (who said it) and
secondary (where the quote or idea was mentioned) sources
should be included in the in-text
citation information. Only the secondary source should be listed
in the reference section,
however. Use “as cited in” (without the quotation marks) to
indicate the secondary source. For
example, James Morgan hinted that “goat milk makes the best
ice cream” (as cited in Alone, 2008, p. 117). Morgan is the primary source (he said it) and
Alone is the secondary source (he
quoted what Morgan said). Only the secondary source is listed
in the reference section (Alone,
and not Morgan) because if readers want to confirm the quote,
they know to go to page 117 of
Alone’s book.
Personal Communication and Classical Work
Personal Communications
The APA manual rationalizes the exclusion of references for
information obtained
through personal communication (such as an interview, email,
telephone call, postcard, text
message, or letter) in the reference list because your readers
will not be able to go directly to
those sources and verify the legitimacy of the material. Instead,
these items are cited only in the
body of the paper. You must include the individual’s first
initial, his or her last name, the phrase
“personal communication,” and the full date of such
communication. As with other citations,
such citations may be either narrative or parenthetical. For
example, L. Applebaum advised him
to dip pretzel rolls in cheese fondue (personal communication,
July 13, 2015). The alternative is
that he was advised to dip pretzel rolls in cheese fondue (L.
Applebaum, personal
communication, July 13, 2015). Note that there is no entry for
Applebaum in the reference list.
Classical Works
Classical works, such as the Bible and ancient Greek or Roman
works, are also cited in
the body of the paper but not included in the reference list. If
you use a direct quote, you must
include the full name of the version or translation you quoted
from the first time you quote from
it, but then you do not name the version or translation again in
subsequent quotes unless you
change versions or translations (6.18, pp. 178-179). For
example, Philippians 2:14 commands us
to “Do everything without complaining and arguing” (New
Living Translation). James 1:27
proclaims that “Pure and genuine religion in the sight of God
the Father means caring for
orphans and widows in their distress and refusing to let the
world corrupt you.” Galatians 5:22
says that “the fruit of the Spirit is love, joy, peace, patience,
kindness, goodness, faithfulness”
(New American Standard). Note that there is no translation
cited for the middle quote, since it
was also taken from the NLT, which was specified in the
immediately preceding citation.
Technically, it would not be necessary or proper to include any
version when you paraphrase the
Bible because all versions essentially say the same message in
each verse, so a paraphrase of one
would apply equally to all versions. However, the APA (2010)
manual is not explicitly clear that
this rule only applies to direct quotes, and for the sake of
consistency and curbing confusion, the
OWC has opted to advise students to include the version the
first time, even for paraphrases.
Lectures and PowerPoints
Course or seminar handouts, lecture notes, and PowerPoint
presentations are generally
treated like personal communications unless they are published
in material that can be readily
retrieved by your audience, like on a public website. When
citing a PowerPoint presentation,
include the slide number rather than the page number. For
purposes of LU course presentations
and lectures, however (which are not readily available to the
public), the OWC advises students
that there are two options. The first and more proper way is to
cite it as a video lecture with the
URL for the presentation, naming the presenter(s) in the
author’s position. Many of LU's classes
are set up through Apple's iTunes University; search for your course and find the specific video at http://www.liberty.edu/academics/cafe/bb/index.cfm?PID=25563. Brewers and Peters (2010) is an example.
The second option, if you cannot find it on iTunes U, names the
course number and
enough details for others to identify it within that course, in a
sort of book format, with the city,
state, and publisher relating to LU. Peters (2012) is an example
of this. You will note that in this
particular case, the iTunes U included information on a second
author that was not readily
identifiable in the Blackboard video itself. Usually, you will
find the year of publication in the
closing screen at the end of the presentation.
Dictionary Entries
The proper format for citing and referencing word definitions
from dictionaries differs
from other citations and references because the word defined is
used in the author’s position,
followed by the year (if known, or n.d. if not known). This is
followed by “In” and the name of
the dictionary (e.g., Merriam-Webster), and includes a URL to
the webpage if searched online. If
you used a hard copy book, include the standard city, state, and
publisher details. The in-text
citation in the body of the paper would also use the word
searched in the author’s place, as well
as the year: (Heuristic, n.d.).
Exhaustive Samples Available
For a chart of a myriad of different sources and how each is
formatted in proper APA
format, look for the “Downloadable version of the OWL Purdue
information on APA citations”
on Liberty University’s Online Writing Center’s “APA
Formatting” webpage.
Electronic Sources
The APA, author of the APA manual, published a blog entry on
how to cite documents
found on the Internet (see http://blog.apastyle.org/apastyle/2010/11/how-to-cite-something-you-found-on-a-website-in-apa-style.html). It includes a .pdf chart
with all the possible
combinations, depending on what information you have or are
missing. Use this for all online
resources other than LU-course lectures.
APA requires inclusion of a Digital Object Identifier (DOI) in
the references whenever
available. These should be denoted in lower case (doi). Note
that there should be no
punctuation after the doi in your reference list, and no space
between the initials and the number
itself. If you cite “Retrieved from” with a URL, note that APA
(2010) does not include the date
of retrieval “…unless the source material may change over time
(e.g., Wikis)” (p. 192). Some of
the hyperlinks in this paper are activated (showing blue,
underlined text) for the purposes of
visualization, but hyperlinks should be removed in scholarly papers, and URLs should appear only in the reference list. To do this, right-click the hyperlink
in Microsoft Word and choose
“remove hyperlink.” Like DOIs, there should be no period after the URL. APA encourages breaking long URLs with soft returns (hold down the Shift key
and press the Enter key) at
forward slashes, periods, or underscores to avoid unsightly
gaps. You may have to remove
multiple elements of the hyperlink that linger in those
circumstances.
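The DOI and URL conventions above can also be checked mechanically; a small, hypothetical Python sketch (not a full reference parser):

    import re

    def clean_reference_tail(entry):
        # Normalize "DOI:" / "doi: " to "doi:" with no space, and strip
        # a trailing period after a DOI or URL, per the rules above.
        entry = re.sub(r"\bDOI:\s*", "doi:", entry)
        entry = re.sub(r"\bdoi:\s+", "doi:", entry)
        if re.search(r"(doi:\S+|https?://\S+)\.$", entry):
            entry = entry[:-1]
        return entry

    print(clean_reference_tail("13-18. DOI: 001.118.13601572."))
    # 13-18. doi:001.118.13601572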
Final Formatting Tweaks
An APA paper should be double-spaced throughout, with no extra spacing between lines. It should also use Times New Roman, 12-point font throughout.
Sometimes when you format your
paper or cut-and-paste material into it, things get skewed. One
quick way to ensure that your
paper appears correct in these regards is to do a final formatting
tweak after you have completed
your paper. Hold down the “Ctrl” button and press the “A” key,
which selects and highlights all
of the text in your paper. Then go to the Home tab in Microsoft
Word and make sure that Times
New Roman and 12-point font are selected in the Font box.
Next, click on the arrow at the
bottom of the Paragraph tab. Set your spacing before and after
paragraphs to “0 pt” and click the
“double” line spacing. If you are more advanced on the
computer, you might consider changing
the default settings in Word that create some of these formatting
errors, but the steps listed here
will correct them if you do not have advanced word-processing skills.
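If you are comfortable scripting, the same sweep can be done with the third-party python-docx library; a minimal sketch, assuming your paper is saved as paper.docx (a hypothetical file name):

    from docx import Document          # pip install python-docx
    from docx.shared import Pt

    doc = Document("paper.docx")
    for paragraph in doc.paragraphs:
        fmt = paragraph.paragraph_format
        fmt.line_spacing = 2           # double-space throughout
        fmt.space_before = Pt(0)       # no extra spacing between paragraphs
        fmt.space_after = Pt(0)
        for run in paragraph.runs:
            run.font.name = "Times New Roman"
            run.font.size = Pt(12)
    doc.save("paper-formatted.docx")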
Conclusion
The conclusion to your paper should provide your readers with a
concise summary of the
main points of your paper (though not via cut-and-pasted
sentences used above). It is a very
important element, as it frames your whole ideology and gives
your reader his or her last
impression of your thoughts.
After your conclusion, insert a page break at the end of the
paper so that the reference list
begins at the top of a new page. Do this by holding down the
“Ctrl” key and then “Enter.” You
will go to an entirely new page in order to start the reference
list. The word “Reference” or
“References” (not in quotation marks—for singular or multiple
resources, respectively) should
be centered, with no bolding or italics. Items in the reference
list are presented alphabetically by
the first author’s last name and are formatted with hanging
indents (the second+ lines are
indented 1/2” from the left margin). If you include a DOI or
URL, be sure to remove the
hyperlink as addressed above.
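Alphabetizing by the first author's last name can likewise be verified with a quick script; a toy Python sketch (the entries are invented, and it keys naively on the text before the first comma):

    entries = [
        "Brown, M. (2009). Capricious as a verb. ...",
        "Alone, A. (2008). This author wrote a book by himself. ...",
        "Brown, J. (2009). Ardent anteaters. ...",
    ]
    # Sort by the first author's last name, then by the rest of the entry.
    entries.sort(key=lambda e: (e.split(",")[0].lower(), e.lower()))
    for entry in entries:
        print(entry)
    # Alone, A. ... / Brown, J. ... / Brown, M. ...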
One example of each of the primary types of resources will be
included in the reference
list, as cited in the body of the paper, for illustrative purposes. Remember that, for purposes of this paper only, the sources cited in the body of the paper were provided for illustrative purposes only
and thus are fictional, so you will not be able to locate them if
you searched online.
Nevertheless, in keeping with APA style, all resources cited in
the body of the paper are included
in the reference list and vice versa (except for personal
communications and classical works, per
APA’s published exceptions). Be absolutely sure that every
resource cited in the body of your
paper is also included in your reference list (and vice versa),
excepting only those resources with
special rules, such as the Bible, classical works, and personal
communications.
The reference list in this paper will include a book by person(s),
a book whose publisher
is the same as the corporate author, a chapter in an edited book,
a journal article, a webpage
document, a resource with no author, a dictionary entry, one
with no year of publication noted,
two or more resources by the same author in the same year of
publication (arranged
alphabetically by the first word in the title, but with the
addition of letters in the year to
distinguish which one you are referring to in the body of your
paper), two or more resources by
the same author in different years (arranged by date, with the
earlier one first), resources with the
same first author but differing others, a paper previously
submitted by a student in a prior class, a
resource with up to seven authors, and one with more than seven
authors.
Lastly, below are a few webpages that address critical topics,
such as how to avoid
plagiarism and how to write a research paper. Be sure to check
out Liberty University’s Online
Writing Center (http://www.liberty.edu/index.cfm?PID=17176)
for more tips and tools, as well
as its Facebook page
(https://www.facebook.com/LibertyUniversityOWC/).
Remember that
these are only provided for your easy access and reference
throughout this sample paper, but web
links and URLs should never be included in the body of
scholarly papers; just in the reference
list. Writing a research paper
(https://www.youtube.com/watch?v=zaa-PTexW2E or
https://www.youtube.com/watch?v=KNT6w8t3zDY) and
avoiding plagiarism
(https://www.youtube.com/watch?v=VeCrUINa6nU).
References

Acworth, A., Broad, P., Callum, M., Drought, J., Edwards, K., Fallow, P., & Gould, P. (2011). The emphasis of the day. Melville, PA: Strouthworks. [1]

Allen, B., Bacon, P., & Paul, M. (2011). Pericles and the giant. The Journal of Namesakes, 12(8), 13-18. doi:001.118.13601572 [2]

Alone, A. (2008). This author wrote a book by himself. New York, NY: Herald. [3]

Alone, A., Other, B., & Other, C. (2011). He wrote a book with others, too: Arrange alphabetically with the sole author first, then the others. New York, NY: Herald. [4]

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author. [5]

Brewers, G., & Peters, C. (2010). Defining integration: Key concepts [Video lecture]. Retrieved from https://itunes.apple.com/us/podcast/introduction-to-integration/id427907777?i=92371729&mt=2 [6]

Brown, J. (2009). Ardent anteaters. Merill, NJ: Brockton Publishers.

Brown, M. (2009). Capricious as a verb. Journal of Grammatical Elements, 28(6), 11-12. [7]

Carlisle, M. A. (n.d.). Erin and the perfect pitch. Journal of Music, 21(3), 16-17. Retrieved from http://make-sure-it-goes-to-the-exact-webpage-of-the-source-otherwise-don’t-include [8]

Double, C. (2008a). This is arranged alphabetically by the name of the title. Banks, MN: Peters.

Double, C. (2008b). This is the second (“the” comes after “arranged”). Banks, MN: Peters. [9]

Harold, P., Maynard, M., Nixon, L., Owen, C., Powell, C., Quintin, J., … Raynard, A. (2014). Apricot jam: A sign of the times. Endicott, NY: Peace & Hope. [10]

Heuristic. (n.d.). In Merriam-Webster’s online dictionary (11th ed.). Retrieved from http://www.m-w.com/dictionary/heuristic [11]

Liberty University. (2015). The online writing center. Retrieved from https://www.liberty.edu/index.cfm?PID=17176 [12]

Owen, C. (2012). Behavioral issues resulting from attachment disorders have spiritual implications. Unpublished manuscript, COUN 502, Liberty University. [13]

Perigogn, A. U., & Brazel, P. L. (2012). Captain of the ship. In J. L. Auger (Ed.), Wake up in the dark (pp. 108-121). Boston, MA: Shawshank Publications. [14]

Peters, C. (2012). Counseling 506, Week One, Lecture Two: Defining integration: Key concepts. Lynchburg, VA: Liberty University Online. [15]

Prayer. (2015). Retrieved from http://www.exact-webpage [16]

Second, M. P. (2011). Same author arranged by date (earlier first). Journal Name, 8, 12-13.

Second, M. P. (2015). Remember that earlier date goes first. Journal Name, 11(1), 18. [17]

Notes:
[1] Resource with seven authors (the maximum allowed by APA before the special rule applies).
[2] Typical journal article with a DOI.
[3] Entry by an author who also appears as one of many authors in another resource (the single-author entry appears first in the list).
[4] Multiple authors appear after the same single-author resource.
[5] Resource with the corporate author as publisher.
[6] LU video lecture using iTunes U details.
[7] Resources by two authors with the same last name but different first names in the same year of publication; arrange alphabetically by the first initials.
[8] Resource with no publishing date, with a URL.
[9] Two resources by the same author in the same year; arrange alphabetically by the title and then add lowercase letters (a and b, respectively, here) to the year.
[10] Resource with eight or more authors; note the ellipsis (…) in place of the ampersand (&).
[11] Dictionary entry.
[12] Online webpage with a URL.
[13] Citing a student’s paper submitted in a prior class, in order to avoid self-plagiarism.
[14] Chapter from an edited book.
[15] LU class lecture using course details rather than iTunes U.
[16] Online resource with no named author; the title of the webpage is in the author’s place.
[17] Two resources by the same author, in different years; arrange with the earlier year first.
JOURNAL OF RESEARCH ON TECHNOLOGY IN EDUCATION, 39(4), 331-357
Examining the Development of a Hybrid Degree Program: Using Student and Instructor Data to Inform Decision-Making

Audrey Amrein-Beardsley, Teresa S. Foulger, and Meredith Toth
Arizona State University
Abstract
This paper investigates the questions and considerations that
should be discussed by
administrators, faculty, and support staff when designing,
developing and offering a hybrid
(part online, part face-to-face) degree program. Using two Web
questionnaires, data were
gathered from nine instructors and approximately 450 students
to evaluate student and
instructor perceptions and opinions of hybrid instruction and
activities. In comparison to prior
research, the results of this study offer larger and more
significant policy and programmatic
implications for degrees based on the hybrid format, including
instructional technology
training and support for students and instructors, creation of
common class procedures and
expectations, and development of consistent schedules that
maximize benefit and flexibility
for students and instructors. (Keywords: hybrid, online, degree
program, communities of
practice, teacher education, organizational change.)
INTRODUCTION
While online learning has become the focus of much research
and debate
regarding its efficacy in meeting or exceeding student learning
outcomes
(Neuhauser, 2002; Russell, 1999; Skylar, Higgins, Boone,
Jones, Pierce,
& Gelfer, 2005; Summers, Waigandt, & Whittaker, 2005),
hybrid courses
have been largely treated as a subset of distance education and
are seldom
examined as a unique method of course delivery. Due to the
development of
readily available technologies, the potential of hybrid
instruction as a model
that combines these new technological applications with more
traditional
approaches to education has been recognized (Anastasiades &
Retalis, 2001).
While literature exists evaluating online courses (Benbunan-
Fich & Hiltz, 2003;
DeTure, 2004; Overbaugh & Lin, 2006), online degree programs
(Benson,
2003; Snell & Penn, 2005; Wilke & Vinton, 2006), and hybrid
courses
(Donnelly, 2006; Leh, 2002; Riffell & Sibley, 2005), little has
been published
specific to the design opportunities made available by hybrid
degree programs.
Recent studies by the National Center for Education Statistics
(Waits &
Lewis, 2003) and The Sloan Consortium (Allen & Seaman,
2006) show a
growing appeal and acceptance of online learning. However,
little is understood
about effective program design when multiple courses are
linked in a formal
degree program.
Drawn by the appeal of a model that combines the flexibility of
online
learning with the benefits of in-class meetings and activities, a
teacher education
college in a university in the southwest United States chose to
investigate
the hybrid model as a new delivery method for its teacher
preparation
undergraduate degree program. Utilizing a survey research,
mixed-methods
approach, this study was largely exploratory in nature and
sought to answer
the following research question: What policy and programmatic
issues should
be discussed by administrators, faculty, and support staff when
designing,
developing and offering a hybrid degree program?
Through an analysis of student and instructor perceptions of
hybrid course
design and instruction coupled with administrative directives,
the researchers
sought to understand the concerns of each group. This study
documents the
knowledge brokered between students, instructors and
administrators, and
provides information to stakeholders that will inform degree
program decisions
and promote common practices across classes.
LITERATURE REVIEW
Compared to other areas of education research, the field of
online learning
is still relatively new, and consistent definitions or methods of
categorization
have yet to be established. Classifications of online learning
vary in a number
of ways, such as the technologies employed (Garrison, 1985),
teaching and
learning methods (Misko, 1994), pedagogical approaches
(Dziuban, Hartman
& Moskal, 2004), and where the design lies on the continuum
from fully face-
to-face to fully online (Allen & Seaman, 2005; Twigg, 2003).
Some scholars
do not draw such clear distinctions and instead describe as
"hybrid" any course
that combines traditional face-to-face instruction with online
technologies
(Swenson & Evans, 2003).
For the purposes of this study, the researchers use the hybrid
terminology
already in use by our university administration. This definition
aligns with
that of the Sloan Consortium (Allen & Seaman, 2006) as a
delivery method
that blends face-to-face and online instruction. More
particularly, it aligns
with Twigg's hybrid model, which offers a more specific
definition referring to
the "replacement" of traditional class time with out-of-class
activities such as
Web-based resources, interactive tutorials and exercises,
computerized quizzes,
technology-based materials, and technology-based instruction
(Twigg, 1999).
To facilitate the transition from traditional face-to-face to
hybrid courses,
Aycock, Garnham, and Kaleta (2002) recommend instructors
start small
by redesigning an activity or unit of a course, then augment the
process in
subsequent semesters. When multiple hybrid courses are fully
implemented,
the hybrid degree program will accommodate the needs of
today's students
by offering a program that is accessible and flexible (Bonk,
Olson, Wisher, &
Orvis, 2002; Graham, Allen, & Ure, 2003; Sikora, 2002). This is
particularly
relevant when students taking multiple courses in a given
semester attempt to
schedule classes and internships in ways that support demands
on their time.
Over the last several decades, most research on courses that
blend face-to-face
and technology-mediated instruction has focused on the way
technologies such
as audio recordings (LaRose, Gregg, & Eastin, 1998), television
(Machtmes
& Asher, 2000), computer conferencing (Cheng, Lehman, &
Armstrong,
1991), or course management systems (Summers, Waigandt, &
Whittaker,
2005) can be used to provide instruction as effective as that of a
traditional
face-to-face classroom. Literature specific to hybrid courses
has followed this
trend and also reveals an emphasis on student achievement
(Boyle, Bradley,
Chalk, Jones, & Pickard, 2003; McCray, 2000; Olapiriyakul &
Scher, 2006;
O'Toole & Absalom, 2003) or the affective factors most valued
by students
or instructors in hybrid courses (Ausburn, 2004; Bailey &
Morais, 2004;
Parkinson, Greene, Kim & Marioni, 2003; Woods, Baker, &
Hopper, 2004).
More recently, attention has shifted from the technology itself
to an emphasis
on the pedagogical approaches that should lead the way
(Bennett & Green,
2001; Buckley, 2002; Reeves, Herrington, & Oliver, 2004;
Twigg, 2001).
Adding online technologies complicates instruction. Quality
online
instruction must incorporate learning theory and practices from
traditional
face-to-face courses as well as effective pedagogical use of
technology (Yang &
Cornelious, 2004). Since instructors rely on a number of factors
to accomplish
their programmatic goals, those that contribute to successful
instructional
design and delivery are difficult to pinpoint in degree programs,
whether online,
hybrid, or face-to-face (Moore, 1993).
Yet, if institutions interested in exploring hybrid delivery focus
only on
the design and delivery of individual course offerings, problems
such as
disjointedness, a lack of "program" focus, and overall poor
quality can arise
from neglecting to examine the program as a whole (Husmann &
Miller,
2001). Limited knowledge is available regarding the
programmatic implications
of hybrid design (Phipps & Merisotis, 1999), the focus of this
study.
As allies in the learning process, faculty and administrators
must take time
to identify the factors influencing student satisfaction, adapt
course design and
structure to meet diverse student needs, and actively engage in
the learning
process with students (Young, 2006). The present study seeks to
fill this gap in
the literature by understanding administrative directives and
gathering input
from student and instructor communities to identify the larger
and more
significant policy and programmatic implications related to
designing and
developing hybrid degree programs.
THEORETICAL FRAMEWORK
Participation in Communities of Practice
Within any organization, groups of people associated with a
common practice
naturally come together to share success and failures and
brainstorm new ideas.
This is a naturally occurring phenomenon of a healthy system
(Wenger, 1998).
Rogers (2002) observed that although opportunities for
individualized learning
are increasing, there are significant advantages to group
learning. Although
struggles are more likely to arise within groups and group work
requires certain
levels of maturity among participants (Goleman, 1995;
Mezirow, 2000), there
are definite advantages for groups in the learning process,
including (a) groups
can provide a supportive environment, (b) groups create
challenges unavailable
in isolated learning situations, (c) groups build more complex
cognitive
structures due to the representation of a variety of experiences,
and (d) groups
are dynamic and can become a community of practice as they
draw in members
(Rogers, 2002).
The Communities of Practice learning theory (CoP)
encompasses these
elements of collaboration within groups and organizational
systems. In a
single CoP, members represent unique experiences and
knowledge, but unite
for the purpose of improving their common practice. These
collaborative
experiences form naturally based on the needs of the
participants (Sumsion &
Patterson, 2004). Once formed, the participants develop ways of
maintaining
connections within and beyond their community boundaries
(Sherer, Shea,
& Kristensen, 2003). Constituencies outside the CoP might
include those at
various levels within the organization, some outside of the
organization, and
newcomers attempting to enter the CoP. When individuals are
involved in
multiple CoPs, transfer of knowledge from one CoP to the other
can occur. It
is difficult, however, for newcomers in unfamiliar communities
to understand
the community workings as fully as long-standing members
(Brown & Duguid,
2000; Lave & Wenger, 1991; Wenger, 1998).
Boundary Brokers and Trajectories
In some cases, CoP members can take on the role of boundary
brokers to
expedite organizational change (Sherer, Shea & Kristensen,
2003). When
members of a community exist on the periphery and broker
information
with another CoP, a boundary trajectory occurs (Wenger,
McDermott, &
Snyder, 2002). In such cases, the links between the CoPs cause
boundaries to
expand and create a practical mechanism for greater
understanding between
communities (Iverson & McPhee, 2002). In this way, boundary
brokers
seamlessly expand access to resources within relevant
communities (Sherer,
Shea, & Kristensen, 2003), especially in organizations that
nurture membership
in multiple communities (Kuhn, 2002). However, it is a very
delicate challenge
to sustain an identity in this type of social setting, as those who
translate,
coordinate, and align perspectives through ties to multiple
communities must
be able to legitimately influence the "development of a practice,
mobilize
attention, and address conflicting interests" (Kuhn, 2002, p.
109).
Although organizations can support infrastructural investment
for CoPs,
CoPs function best when members engage in authentic
interactions and
negotiations based on the needs of the members. These needs
bring them
together in a meaningful way surrounding their individual
identities, roles,
intentions, realities, and agendas (Thompson, 2005). This
balance between
administrative or professional development forces and the
organic needs of
members that choose to engage in the inquiry process reaffirms
the need
for a professional development environment that embraces CoP
functions
and empowers CoP members (Cousin & Deepwell, 2005;
Foulger, 2005;
Thompson, 2005).
Situating This Study
As part of a college initiative to explore new modes of
delivering degree
programs, the college dean approached the Elementary
Education department
chair (the largest department in the college) and one technology
instructor
with the charge of “creating capacity” to offer online courses. To
develop
and evaluate the courses, the technology instructor solicited
guidance from
[Figure 1. Findings from this study were drawn from the convergence of student, instructor, and administrator perspectives: the student community, the instructor CoP, and the administration CoP, linked through boundary brokering.]
information technology administrators, instructional design
support personnel,
college administrators, department chairs, instructors, and
students. After
consulting with these stakeholders, the college offered a two-
day intensive
seminar on designing and developing hybrid courses.
Sixteen instructors, including the Elementary Education
department chair,
volunteered to participate in the hands-on seminar and redesign
a two-week
component of one of their face-to-face courses as a hybrid unit
offered half
online and half face-to-face. All of the instructors were
proficient with online
technology tools and received additional training in hybrid
course design
and instruction, but they had never taught online before. The
instructors
collaborated to redesign their units using asynchronous
technologies that
employed Blackboard tools and methods (Blackboard, version
6.2, the
university-sponsored course management system).
Because communities of practice are not necessarily fixed
systems, and
because each interaction among members has a multitude of
influences
(Wenger, 1998), a prescriptive vision for the hybrid program
could not be
determined at the conception of this hybrid investigation. This
lack of rigidity
was embraced by instructors participating in the study.
From the CoP perspective, the hybrid instructors in this study
negotiated a
balance between the identities associated with three specific
social forces (see
Figure 1). The following issues were expressed prior to the
beginning of this
study and were used to inform the development of the hybrid
design:
• Administration Community of Practice: Administrators were
most
concerned with decreasing use of classroom space, providing
training and
support to hybrid instructors, and creating incentives for
participation.
Instructors served as peripheral participants and advisors to the
Administrative CoP at the onset of the study by communicating
the
need to develop policies and procedures supportive to the
transformation
of a face-to-face to hybrid degree program.
• Hybrid Instructor Community of Practice: Teacher education
instructors
who elected to redesign a previously-taught course into a hybrid
course
were initially concerned with maintaining high standards and
student
accountability, assuring that technology would be used to
enhance
instruction, and understanding which activities were best suited
for face-
to-face or online environments.
• Hybrid Student Community: Instructors initially knew very
little
about the student perspective. However, they realized the
importance
of brokering knowledge from the student community as a way to
understand their perspective and use the information to
influence
instructor and administrative decisions.
As the college devised initial plans for the development of the
hybrid program
and began implementation, purposefully exchanging information
between these
three critical stakeholder groups led to a greater understanding
of the realities of
each group. These initial conversations brought about a broader
understanding
of the contributing practices of administrators and instructors
believed to be
critical for student success in the hybrid degree program.
Through the methods
employed in this study, the researchers probed the instructor
and student CoPs
more deeply to determine the most effective practices and how
this knowledge
could inform the administrative CoP to advance the hybrid
program.
METHODS
Data reported in this study were collected from instructors and
students as
they experienced the college's first attempt at transforming
traditionally face-to-
face instruction to a hybrid format.
Instructor Sample
After completing the seminar on hybrid course design and
instruction, nine
of the 16 instructor participants (56%) committed to teaching
their hybrid
unit the following semester. At the conclusion of their units, all
nine instructor
participants completed the online Instructor Hybrid Evaluation
Questionnaire
(see Appendix), designed to capture instructors' perceptions of
their students'
and their own experiences with the hybrid unit. One instructor
completed the
questionnaire twice for two different courses (response rate =
100%).
Student Sample
Following the directions of the primary researchers in this
study, instructor
participants distributed the online Student Hybrid Evaluation
Questionnaire
(see Appendix) to their students who participated in their hybrid
unit
of instruction. To assure a high response rate, each instructor
solicited
participation directly from their students by explaining to
students that their
feedback would help improve the overall program, particularly
for future
students. Each of the nine instructors distributed the
questionnaire directly
to their students. Some students participated in more than one
course
where hybrid units were offered; these students were
encouraged to take the
questionnaire multiple times based on their unique experiences
in each course.
In cases where the relative response rate was of concern,
students were sent one
reminder to participate.
A total of 413 out of approximately 450 students completed the
online
questionnaire (response rate = 92%). The high response rate is
probably due
to the fact that students completed the anonymous online
questionnaire
during normal class time or were held accountable for their
participation,
predominantly through class credit.
Instrument
Rather than examining success factors for students in these
courses, two
complementary Web questionnaires were designed to gather
information
regarding student and instructor perspectives of the hybrid
instruction and
activities, the hybrid degree program, and course planning and
design (Benson,
2002). Similar questionnaire forms allowed for comparative
analyses between
instructor and student participants and more holistic analyses
across groups.
Part I of both the instructor and student questionnaires collected
general
demographic, technology access, and course and programmatic
information.
Part II presented instructors and students with a list of
technology tools
provided within Blackboard. If tools were used, instructors and
students were
asked to respond to Likert-type items indicating the extent to
which the tools
enhanced a) the instructor participants' perceived abilities to
provide quality
instruction and b) the student participants' perceived abilities to
learn.
Part III, Section 1 asked instructors and students to indicate
their levels
of agreement with statements about affective factors of hybrid
instruction.
This section was adapted from materials provided online as part
of the
Hybrid Course Project at the University of Wisconsin-
Milwaukee (Learning
Technology Center, 2002). To encourage students and
instructors to read and
reflect on each statement and decrease the likelihood that they
would select the
same value for continuous items, positive and negative
statements were placed
in a randomized sequence. Part III, Section 2 asked instructors
and students
to indicate their overall levels of agreement regarding face-to-
face and online
environments.
Part IV asked students and instructors to provide insights they
thought would
be useful to instructors and the college regarding online
activities, hybrid course
development, and hybrid degree program development.
Instrument Internal-Consistency Reliability
Estimates of reliability were calculated for each section of the
student and
instructor Web questionnaires. Coefficient-alpha estimates of
internal-
consistency reliability were computed for Parts II and III
(Cronbach, 1951).
Coefficient-alpha estimates for the positive and negative statements built into Part III, Section 1 were adjusted so that responses could be interpreted on the same scale, and inversely related estimates would not cancel each other out. All sections of the Web questionnaires yielded acceptable alpha levels (see Table 1 for coefficient-alpha levels of both instruments) and warranted their use for the purposes of this research study. Values below .70 are often considered unacceptable (Nunnally, 1978).

Table 1: Coefficient Alpha Estimates of Reliability

                                                      Student Web      Instructor Web
                                                      Questionnaire    Questionnaire
Part II: Blackboard Tools                             0.724            0.791
Part III, Section 1: Affective and Personal Factors   0.718            0.828
Part III, Section 2: Overall Agreeability             0.853            0.744
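For readers who want to reproduce this computation, coefficient alpha follows directly from the item variances and the variance of the total score. A minimal Python/numpy sketch with invented data, including the reverse-coding of negatively worded items described above:

    import numpy as np

    def cronbach_alpha(scores):
        # scores: respondents x items matrix.
        # alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def reverse_code(items, scale_max=5):
        # Flip negatively worded Likert items (1 <-> 5, 2 <-> 4) so all
        # items point in the same direction before computing alpha.
        return scale_max + 1 - items

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(413, 10)).astype(float)  # invented data
    responses[:, [2, 5]] = reverse_code(responses[:, [2, 5]])     # negative items
    print(round(cronbach_alpha(responses), 3))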
Methods of Data Analysis
Frequency statistics were used to analyze each demographic,
course, and
programmatic question in Part I of both Web questionnaires. For
Parts II and
III, descriptive statistics were calculated using participant
responses to the
Likert items, and means were rank ordered to illustrate levels of
participant
agreement per item. T-tests using independent samples were
also used to test
for significant differences between the opinions of instructor-
and student-
participant groups.
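As an illustration of the group comparison described here, an independent-samples t-test can be run with scipy on per-participant item means; the sample sizes below mirror the study, but the values are invented:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    student_means = rng.normal(loc=3.8, scale=0.6, size=413)    # invented
    instructor_means = rng.normal(loc=3.5, scale=0.7, size=10)  # invented

    t_stat, p_value = stats.ttest_ind(student_means, instructor_means)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")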
Participant responses to the open-ended, free-response items in
Part IV
were read, coded, and reread, and emergent themes were
categorized into
bins (Miles & Huberman, 1994). Once bins became focused and
mutually
exclusive in nature, the items included within each bin were
collapsed into
categories, quantified, and labeled. Overall themes were
validated by instructor
participants during a focus group conducted by the researcher
participants, and
the themes were left intact, without any additions or deletions.
These themes
will be discussed further in the Implications section of this
study.
RESULTS
Part I: Demographic Information and Technology Access
In Part I of the Web questionnaire investigators gathered
demographic,
technology access, and course and programmatic information
from student
and instructor participants. More than 60% of student
participants primarily
used a personal desktop computer to complete coursework.
About 20% of
student participants used portable laptops, and 10% completed
online lessons
and assignments on campus at the student computer center or
the library.
Approximately 90% of student participants accessed the Internet
through a
high-speed connection, while about 10% relied on dial-up
networks.
Students reported that an average of 3.7 of their courses (out of a maximum of five courses students may take each semester) involved some hybrid component during the semester of study. Instructor participants indicated that they replaced an average of six face-to-face classes (out of approximately 30 total instructional days) with online instruction. The total number of face-to-face days replaced with online instruction ranged from a low of two to a high of 10 days.

Figure 2. Blackboard tools ranked by students and instructors from most to least useful. (Tools rated: online gradebook, course document downloads, Internet sites/links, e-mail between instructor and student, e-mail between student and student, small group discussion board, full class discussion board, online assignment submission, online quizzes/tests, and digital drop box.)
Part II: Student and Instructor Perceptions of Blackboard
Learning Tools
In Part II of the Web questionnaire, student and instructor
participants
identified the Blackboard tools they found most and least useful
in terms of
enhancing student learning in the hybrid format. The closer
each item mean
is to 5, the more the student or instructor participants agreed
with each
statement. For the purposes of this study, the results from this
section are used
to provide larger programmatic considerations and
recommendations (see
Figure 2).
Of the Blackboard tools identified in the Web questionnaire,
students found
the online grade book and announcements most useful. Students
appreciated
instructors who graded assignments and posted them in the
grade book in
a timely and efficient manner and criticized instructors who did
not use the
grade book effectively or did not post grades soon after
reviewing student work.
Students appreciated that they could monitor their progress in
courses using the
grade book and thought that more college instructors should use
the tool.
Although students appreciated the use of announcements, almost
50% of
student participants expressed a need for instructors to be
consistent with
announcement frequency and to provide clear and simple
written information.
Students also requested that instructors e-mail students after
posting an
announcement, particularly if announcements are not used as
part of the
normal class routine.
Students found the course document downloads, Internet sites
and links, and
e-mails sent to them from the instructor equally useful in terms
of technology
tools that enhanced their learning. Some students expressed
concern regarding
their ability to find or download course documents and others
had difficulty
visiting and spending time on Internet sites if they had only
dial-up access.
Students appreciated when instructors e-mailed them to clarify
components
of the coursework and most appreciated instructors who
responded to student
e-mails in a friendly, "timely" manner. Students were very
critical of instructors
who did not respond to student e-mails in a "timely" manner,
responded in
an unfriendly manner, or did not respond at all. Students
questioned whether
instructors who do not respond to e-mails in such a manner
should be
implementing online activities in their courses. Because
students do not meet as
often in a hybrid setting, the primary communication method
between students
and instructors is e-mail. When instructors did not respond in a
timely manner,
students expressed high levels of frustration and outright anger.
In general, students felt that discussion boards were more useful
than in-
class discussions because students could take their time to
compose a response,
students were required to participate online while they were not
required to
participate in face-to-face discussions, and students who
normally do not
participate in class were not as reluctant to express an opinion
online. Students
also found small-group discussion boards to be particularly
useful when quizzes
and tests required them to use the knowledge gained from such
discussions.
Despite these benefits, students felt that discussion board
assignments
sometimes became redundant, were not always useful, and
sometimes detracted
from more important course activities or assignments.
Instructors disagreed with their students in two ways. First,
instructors found
the Internet sites and links and the full class discussion board to
be significantly
more useful (p < .05) than their students found these technology
features.
Second, instructors found student-to-student e-mail, online
assignment
submissions, course document downloads, small group
discussion boards, and
online quizzes and tests as significantly less useful (p < .05)
than their students
found these technology tools.
Part III: Student and Instructor Responses to Affective Items
In Part III of the Web questionnaire, student and instructor
participants
indicated their level of agreement with 13 affective
statements about
hybrid instruction. The closer each item mean is to 5, the more
the student or
instructor participants agreed with each statement (see Figures
3, 4 and 5).
Of the first 10 statements (Section 1), five were written in a favorable vernacular and five were written in an unfavorable vernacular. For this reason, results have been split into two sections and ranked from high to low levels of agreement.

Figure 3. Instructor and student responses to favorable, affective questions. (Items address balancing coursework with home and work responsibilities, learning more about the subject matter, technology enhancing understanding of the coursework, controlling the pace of learning, and developing communication skills.)
Students agreed that the online components of their classes
helped them
balance their coursework with other home and/or work
responsibilities
and learn more about subject matter. Students most disagreed
that they
had to spend too much time trying to get access to a computer to
do the
coursework effectively, and that they were at a disadvantage
because they did
not understand how to use the technology tools as well as the
other students.
Had the response rate been lower, the use of a Web questionnaire might have suggested that students with technology issues were underrepresented in the sample; however, this was not the case.
Students were most
ambivalent (mean = 2.5) towards whether online learning was
better than
learning in a face-to-face environment.
Instructors viewed the impact of online instruction on their
students'
learning significantly more favorably than did their students.
Instructors were
significantly more concerned than students with whether some
students were
disadvantaged by a lack of technology skills. Instructors were
significantly less
concerned than students with whether the time spent online
would have been
better spent in the classroom and whether online experiences
made students feel
less connected with their instructors (p < .05).
Figure 4. Instructor and student responses to unfavorable, affective questions. (Items address whether time spent online would have been better spent in the classroom, whether the technology tools made participants feel less connected with the instructor or with other students, whether some participants were at a disadvantage because they did not understand the technology tools, and whether too much time was spent trying to get access to a computer.)

Part III, Section 2 included three overarching, open-ended questions designed to capture student and instructor participants' overall opinions and suggestions
regarding hybrid instruction. Each item mean is illustrated. The
closer each
mean is to 5, the more the student or instructor participants
agreed with each
statement.
Overall, students and instructors agreed that it would be a good
idea if the
entire teacher education program involved face-to-face and
online activities and
if other courses incorporated more online activities. They also
believed that the
content of the courses was well suited for a combination of
face-to-face and
online activities. Instructors agreed at higher levels, but
students and instructors
ranked the three statements in the same order by similar levels
of agreement.
Part IV: Student Responses to Open-Ended Questions
In Part IV of the Web questionnaire, student and instructor
participants
were asked to provide information or insights they thought
would be useful
to instructors and the college regarding online activities and
hybrid course
development.
In response to the request for information or insights they
thought would
be useful to their instructors regarding hybrid activities, student
participants
responded with enthusiasm for increasing hybrid courses across
the college,
with the stipulation that the hybrid components be beneficial to
students
and that assignments be of reasonable length and pertinent to
the students'
professional development. Students requested that instructors plan online/in-class schedules in collaboration with other instructors to maximize flexibility and minimize confusion.

Figure 5. Instructor and student responses to overarching, open-ended questions. (Items: the content of this course is well suited for a combination of face-to-face and online activities; other courses should incorporate more online activities; the entire program should involve face-to-face and online activities.)

In addition, students felt that the online/in-class
schedule should be organized and disclosed to students at the
outset of
a course so they would have the opportunity to opt out of a
course with
online components when scheduling their semesters. In
addition, students
expressed frustrations with some technologies (such as trial
software) they felt
compromised their opportunities to succeed in an online
learning environment.
Instructor participants suggested that all instructors hold
students accountable
for the online work associated with any given course while
maintaining a
certain degree of flexibility, especially given students' busy
schedules and the
challenges they might face in learning new technologies.
Instructors also noted
that hybrid activities should not create additional work for
students, but should
replace less valuable work normally conducted in a face-to-face
setting. Finally,
instructors recommended that all instructors be clear, organized,
responsive,
and timely when responding to e-mail and other student
communications, such
as discussion boards.
The Web questionnaire also prompted students for information
or insights
they thought would be useful to the college regarding hybrid or
online
activities. A strong majority of students responded favorably
towards hybrid
instruction, but stated the college should proceed with caution.
Approximately
10% of student participants did not encourage the college to
offer more hybrid
courses or activities. This group of students felt that face-to-
face interaction,
rather than some online and some face-to-face interaction, was
more conducive
to their learning. These students also expressed frustrations that
they were
not made aware of the online components before opting in to the
course(s).
In general, student respondents thought that college instructors
should not
implement online activities without first obtaining the skills to
teach in an
online environment, committing to respond to students in a
timely manner, and
organizing their materials in a way that is conducive to online
instruction.
All instructor participants commended the college on its
exploration of
a hybrid degree program and recommended that as the college
progresses,
evaluative efforts continue in order to ensure that hybrid
instruction is
implemented in a way that best benefits student learning.
Instructors also
requested that more training opportunities be made available to
help them use
existing tools, integrate online activities, and effectively
collaborate with each
other.
IMPLICATIONS
During the process of reading, coding, and identifying emergent
themes
representing the three community perspectives, several
categories of
programmatic issues were noted as factors contributing to the
success of the
hybrid program. When these issues and implications were
reviewed with
instructor participants during a focus group, the instructor
participants
validated the implications and the identified themes were left
intact. These
implications are programmatic in nature and mostly address the
administration,
yet they impact the different identities within the hybrid degree
program
community. Addressing these recommendations will affect the
success of
instructor course design and student learning.
Develop Program Policy Supportive to Teaching and Learning
in Hybrid
Courses
When registering for courses, students were not informed that
some course
materials, activities, and assignments would be delivered
online. Some students
adjusted well to the hybrid delivery method, but others
expressed frustration
with the unexpected technology requirements and non-
traditional instructional
methods. With the help of administrators, the researchers made
use of a
course catalog footnote and existing Web site that alerts
students that they are
signing up for a hybrid course and explains how these courses
differ from more
traditional face-to-face classes.
It is our recommendation that when developing and promoting a
hybrid
degree program, expectations, instructional and communication
methods,
technical requirements, and benefits of combining the face-to-
face and online
learning environments be fully communicated to students prior
to registration.
Students can then make an informed decision as to whether the
hybrid format
meets their particular learning styles and preferences, schedule,
and other
needs. This communication could take place by providing
information about
the hybrid degree program in college marketing material, during
advising and
registration sessions, and in program or course orientations. In
such a manner,
instructors and students will have common understandings
regarding course
design and expectations, and students not wanting to participate
may opt out of
such courses.
Support the Creation of Common Procedures and Expectations
across
Courses
When the hybrid units were developed for this study, instructors
for each
of the courses did not collaborate to develop common class or
instructional
procedures. In some cases inconsistencies from course to course
caused student
confusion and frustration.
It is important to remember the student perspective when
developing a
hybrid program. Some common elements across courses could
positively
impact student understanding and feasibility. Instructor CoPs
should be
encouraged to discuss their class procedures and expectations in
order to
develop common procedures. This is not to say that all
instructors should have
identical procedures, but that collaboration for the purpose of
creating some
level of consistency will benefit students. Common procedures
and expectations
could be developed related to e-mail/discussion board use,
netiquette, use of
course announcements, how to handle a technology snow day
(Hitch, 2002),
technology assistance, method for instructor contact, frequency
and deadlines
for discussion board posts, mechanisms for work submission,
etc.
Allocate Face-to-Face and Online Time across Courses
Most of the students participating in this study enrolled in more
than one
course that used a hybrid format. Because the hybrid units did
not fall in the
same time period during the semester, student schedules were
not consistent
from week to week, causing frequent confusion and aggravation.
Using student
feedback, instructors worked with administrators to standardize
Wednesday
and Thursday as face-to-face days, leaving Monday, Tuesday
and Friday free for
student teaching, internships, and other student activities. This
simple solution
provided more structure for students and less confusion across
courses within
the same semester.
Although face-to-face and online activities should best fit the
needs of
a particular subject area and course (Veronikas & Shaughnessy,
2004),
this study suggests that faculty and administrative CoPs work
together to
coordinate a schedule that outlines specific face-to-face and
online days that
will accommodate students taking multiple hybrid classes in the
program.
Maximum flexibility for students will occur when all courses in
a given semester
follow a similar or complementary pattern of online and face-to-
face days.
Support Instructor CoPs as they Refine and Adopt Technology
Tools
All instructor participants in this study received a basic
overview of online
technologies during a summer workshop on designing and
developing hybrid
courses. Still, instructors found it difficult to gain an in-depth
working
knowledge of the online tools and features commonly associated
with online
instruction. The design of activities was inhibited by their
limited knowledge
and familiarity with the available tools. Collaborative
conversations within
instructor CoPs about the functions and features of online tools
appeared to
increase the sophistication of technology use and instructional
design.
Students participating in the study clearly articulated their
preferences
toward certain instructional practices and activities. It was
evident that students
preferred simpler methods of delivery (instructor
presentations available
for effortless download), online interactions (straightforward
discussion
boards), and ease in work submission. Instructor CoPs should
discuss the use
of technology tools to support specific learning needs, but
technology that does
not enhance instruction should be reduced or eliminated.
As instructors within a CoP learn about technology tools and
their
instructional uses, they will develop activities that incorporate
the best of both
face-to-face and online delivery methods. A supportive
environment conducive
to exploration, collaboration, and cooperation will result in instructionally sound activities and shared practices that will contribute to
the overall quality
of the program. To support this professional development and
growth among
hybrid instructors, administrators should provide mechanisms
for faculty
to collaborate within their CoP and interact with others outside
their CoP,
including instructional designers and technology support staff.
Provide Instructional Design Training and Support for
Instructors
The online questionnaire used in this study prompted instructors
to reflect
on their hybrid units and identify successes as well as areas for
improvement.
The resulting data prompted the need for further professional
development
opportunities related to technology tools and delivery options.
Becoming a good hybrid instructor is a developmental process
and requires
continual nurturing and support in terms of the additional time
it takes to
develop and teach a hybrid course, as well as the adjustment to
delivering
materials, interacting with students, and designing activities for
a Web-based
environment (Kincannon, 2002). When asking instructors to
redesign a course
as a hybrid, administrators should recognize that this design and
development
process is akin to developing a new course, and instructors will
likely need
technology training.
As such, administrators need to support the professional
development of
instructors. This can take place in many ways, including
providing adequate
time over the course of several semesters to collaborate with
other hybrid
instructors, instructional designers, experienced colleagues,
technology trainers
and other personnel; soliciting help from other instructors or
institutions who
have more experience; providing hands-on training
opportunities or one-on-
one tutoring; and providing opportunities for instructors to
share their successes
with each other.
Provide Support for Students to Gain New Skills
Anecdotal evidence gathered during this study indicated that
many students
sought help from one another, upgraded from dial-up to faster
Internet
connections at home, accessed the wireless networks on campus
via laptops,
purchased home computers or laptops, and improved their
general technology
skills. It is likely that the need for efficiency in completing
online activities and
assignments drove these changes.
Although it is possible that hybrid degree programs will attract
more
technologically savvy and independent students, it should not be
assumed that
students who enroll in hybrid courses have critical technology
skills (Kvavik,
2005). Those who do not will be disadvantaged by the program
delivery
method. In order for students to focus on course content, it is
critical that
technology not be an obstacle to student access to course
materials and support
resources. As such, hybrid degree programs should identify and
require base-
level technology skills or offer training opportunities that
prepare students with
technology skills before classes begin (Gastfriend, Gowen, &
Layne, 2001).
These minimum technology skills should be communicated in
college materials,
advising sessions, and program or course orientations.
In addition, instructors should not assume that students have
experience
with the technologies used or that they have the ability to adopt
new skills
quickly. Even if students enter the program with a minimum set
of technology
skills, additional training or modeling during face-to-face
classes, and written
procedures and tutorials made available to all students will
decrease concerns
with technology and increase student ability to focus on
content.
Continually Evaluate the Program
Instructors in this study noted that as knowledge was created
and brokered
during seminars and brown bag discussions, through formative
feedback from
students, and via the summative online questionnaire,
evaluation practices
helped them better understand and assess the implications of
hybrid course
and program design. In addition to traditional course
evaluations, ongoing
program evaluation must be implemented to continually
improve instruction
and student learning in any hybrid degree program (Levin,
Levin, Buell, &
Waddoups, 2002). Also, program evaluation and assessment
must be based on
multiple methods and must meet specific standards to ensure
representation of
the program's impact on administrators, faculty and students
(Quality on the
line, 2000).
Normally a new program would undergo rigorous scrutiny, with
intense
ongoing evaluation procedures that lessen over time as issues
are worked out
and satisfaction levels stabilize. However, with technology
playing an integral
role in hybrid courses, as new tools are made available or new
uses for tools
become established, ongoing innovation and refinement of
courses, program
delivery, and program structure becomes more necessary than in
traditional
face-to-face design. If this is the case, then the call for ongoing
program
evaluation policy would be meaningful to administrators,
instructors, and to
students.
Granted, systematically embedding data-driven decision making
within a
hybrid program would require more resources of time and
money than one
might normally commit. Not planning at the onset for continual
innovation
and evaluation would be a mistake for a hybrid program not
wishing to
compromise quality.
CONCLUSIONS
Although the scope of this study was limited to nine instructors
and their
respective students, the results provide interesting and relevant
findings for those
interested in hybrid program design. The data collected indicate
areas of
success as well as areas for improvement, but overall the hybrid
design was
well received. The implications drawn represent a
comprehensive dataset
and demonstrate practices that must be thoughtfully considered
by program
developers before offering a hybrid degree program. While the
primary factor
in any instructional initiative remains the quality of the
instructional design
(Johnson & Aragon, 2002), the implications identified in this article are intended to directly affect the success of students enrolled in a hybrid degree program.
It is hoped that this study will spur further research in this area,
as over time
student profiles will include more technology-savvy populations
needing to
balance education with personal and professional obligations.
For institutions of
higher education wanting to offer innovative programs that accommodate student needs, hybrid degree programs may provide the answer. Any such program should be strategically designed, collaboratively developed, and implemented
within a community vested in offering a successful program.
Contributors
Audrey Amrein-Beardsley is an assistant professor in the
College of Teacher
Education and Leadership at Arizona State University. Dr.
Amrein-Beardsley
holds a PhD in Educational Policy and Research Methods and
specializes in
tests, assessment, and survey research. (Address: Audrey
Amrein-Beardsley, PO
Box 37100, MC 3151, Phoenix, AZ, 85069-7100, Phone:
602.543.6374; Fax: 602.543.7052; E-mail: [email protected])
Teresa S. Foulger is an assistant professor in the College of
Teacher Education
and Leadership at Arizona State University. Dr. Foulger holds
an EdD in
Educational Technology and specializes in technology-rich
environments
where collaboration, communities of practice, and innovative
professional
development models spur organizational change. (Address:
Teresa S. Foulger,
PO Box 37100, MC 3151, Phoenix, AZ, 85069-7100; Phone:
602.543.6420;
Fax: 602.543.7052; E-mail: [email protected])
Meredith Toth is an instructional designer with the Applied
Learning
Technologies Institute at Arizona State University. She holds a
M.A. in
Learning, Design, and Technology from Stanford University and
specializes in
technology integration in higher education. (Address: Meredith
Toth, PO Box
37100, MC 1051, Phoenix, AZ, 85069-7100; Phone:
602.543.3192 ; E-mail:
[email protected])
References
Allen, I., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. The Sloan Consortium. Retrieved November 25, 2006, from http://www.sloan-c.org/publications/survey/pdf/growing_by_degrees.pdf
Allen, I., & Seaman, J. (2006). Making the grade: Online education in the United States, 2006. The Sloan Consortium. Retrieved March 3, 2007, from http://www.sloan-c.org/publications/survey/pdf/Making_the_Grade.pdf
Anastasiades, P. S., & Retalis, S. (2001, June). The educational
process in
the emerging information society: Conditions for the reversal of
the linear model
of education and the development of an open type hybrid
learning environment.
Paper presented at the ED-MEDIA 2001 World Conference on
Educational
Multimedia, Hypermedia & Telecommunications, Tampere,
Finland. ERIC
Document Number: ED466129. Retrieved April 28, 2006.
Ausburn, L. (2004). Course design elements most valued by
adult learners in
blended online education environments: An American
perspective. Educational
Media International, 41(4), 327-337.
Aycock, A., Garnham, C., & Kaleta, R. (2002). Lessons learned
from the
hybrid course project. Teaching with Technology Today, 8(6).
Accessed online
April 12, 2005 at
http://www.uwsa.edu/ttt/articles/garnham2.htm
Bailey, K., & Morais, D. (2004). Exploring the use of blended
learning in
tourism education. Journal of Teaching in Travel & Tourism, 4(4), 23-36.
Benbunan-Fich, R., & Hiltz, S. (2003). Mediators of the
effectiveness of
online courses. IEEE Transactions on Professional
Communication, 46(4), 298-312.
Bennett, G., & Green, F. P. (2001). Student learning in the
online
environment: No significant difference? Quest, 53, 1-13.
Benson, A. (2002). Using online learning to meet workforce
demand: A case
study of stakeholder influence. Quarterly Review of Distance
Education, 3(4),
443-452.
Benson, A. (2003). Dimensions of quality in online degree
programs.
American Journal of Distance Education, 17(3), 145-159.
Blackboard (1997). Blackboard (Version 6.2) [Computer
software].
Washington, DC: Blackboard Inc.
Bonk, C., Olson, T., Wisher, R., & Orvis, K. (2002). Learning from focus groups: An examination of blended learning. Journal of Distance Education, 17(3), 97-118.
Boyle, T., Bradley, C., Chalk, P., Jones, R., & Pickard, P. (2003). Using blended learning to improve student success rates in learning to program. Journal of Educational Media, 28(2/3), 165-178.
Brown, J. S., & Duguid, P. (2000). The social life of
information. Boston:
Harvard Business School Press.
Buckley, D. (2002). Pursuit of the learning paradigm: Coupling
faculty
transformation and institutional change. Educause Review,
37(1), 28-38.
Cheng, H., Lehman, J., & Armstrong, P. (1991). Comparison of performance and attitude in traditional and computer conference classes. The American Journal of Distance Education, 5(3), 51-64.
Cousin, G., & Deepwell, F. (2005). Designs for network
learning: A
communities of practice perspective. Studies in Higher
Education, 30(1),
57-66.
Cronbach, L. J. (1951). Coefficient alpha and the internal
structure of tests.
Psychometrika, 16, 297-334.
DeTure, M. (2004). Cognitive style and self-efficacy: Predicting
student
success in online distance education. American Journal of
Distance Education,
18(1), 21-38.
Donnelly, R. (2006). Blended problem-based learning for
teacher education:
Lessons learnt. Learning, Media and Technology, 31(2), 93-116.
Dziuban, C., Hartman, J., & Moskal, P. (2004). Blended
learning.
EDUCAUSE Center for Applied Research Research Bulletin.
Accessed online
January 21, 2007, at
http://www.educause.edu/ir/library/pdf/ERB0407.pdf
Foulger, T. (2005, Summer). Innovating professional
development standards:
A shift to utilize communities of practice. Essays in Education,
14, Retrieved
September 20, 2006, from
http://www.usca.edu/essays/voll4summer2005.html
Garrison, D. R. (1985). Three generations of technological
innovation in
distance education. Distance Education, 6(2), 235-241.
Gastfriend, H. H., Gowen, S. A., & Layne, B. H. (2001,
November).
Transforming a lecture-based course to an Internet-based
course: A case study.
Paper presented at the National Convention of the Association
for Educational
Communications and Technology, Atlanta, Georgia. ERIC
Document Number:
ED470085. Retrieved April 28, 2006.
Goleman, D. (1995). Emotional intelligence: Why it can matter
more than IQ.
New York: Bantam Books.
Graham, C. R., Allen, S., & Ure, D. (2005). Benefits and
challenges of
blended learning environments. In M. Khosrow-Pour (Ed.),
Encyclopedia of
information science and technology (pp. 253-259). Hershey, PA:
Idea Group.
Hitch, L. P. (2002). Being prepared for technology snow days.
ECAR
Research Bulletin, 24. Retrieved March, 2003, from
http://www.educause.edu/
LibraryDetailPage/666?ID=ERB0224
Husmann, D. E., & Miller, M. T. (2001). Improving distance
education:
Perceptions of program administrators. Online Journal of
Distance Learning
Administration, IV(III). Retrieved September 23, 2006, from
http://www.westga.edu/~distance/ojdla/articles/fall2001/husmann43.pdf
Iverson, J. O., & McPhee, R. D. (2002). Knowledge
management in
communities of practice: Being true to the communicative
character of
knowledge. Management Communication Quarterly, 16(2),
259-266.
Johnson, S. D., & Aragon, S. R. (2002, Spring). An instructional
strategy
framework for online learning environments. Paper presented at
the Academy
of Human Resource Development (AHRD) Conference,
Honolulu, Hawaii.
Retrieved April 28, 2006, from ERIC database.
Kincannon, J. M. (2002, April). From the classroom to the Web:
A study
of faculty change. Paper presented at the Annual Meeting of the
American
Educational Research Association, New Orleans, Louisiana.
ERIC Document
Number: ED467096. Retrieved April 28, 2006.
Kuhn, T. (2002). Negotiating boundaries between scholars and
practitioners:
Knowledge, networks, and communities of practice.
Management
Communication Quarterly, 16(1), 106-112.
Kvavik, R. (2005). Convenience, communications, and control:
How
students use technology. In D.G. Oblinger & J.L. Oblinger
(Eds.), Educating
the Net Generation (pp 7.1-7.20). Washington, DC: Educause.
Retrieved
January 8, 2005, from
http://www.educause.edu/educatingthenetgen/
LaRose, R., Gregg, J., & Eastin, M. (1998). Audiographic
telecourses for
the Web: An experiment. Journal of Computer-Mediated
Communication, 4(2).
Retrieved March 2, 2007, from
http://jcmc.indiana.edu/vol4/issue2/larose.html
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate
peripheral
participation. Cambridge: Cambridge University Press.
Learning Technology Center. (2002, March 12). Learning
technology center:
Our project. Retrieved May 2005 from
http://www.uwm.edu/Dept/LTC/our-
project.html
Leh, A. (2002). Action research on hybrid courses and their
online
communities. Educational Media International, 39(1), 31-38.
Levin, S. R., Levin, J. A., Buell, J. G. & Waddoups, G. L.
(2002).
Curriculum, Technology, and Educational Reform (CETER)
online: Evaluation
of an online master of education program. TechTrends, 46(5),
30—38.
Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-46.
McCray, G. (2000). The hybrid course: Merging on-line
instruction and the
traditional classroom. Information Technology & Management,
1(4), 307-327.
Mezirow, J. (2000). Learning to think like an adult: Core
concepts of
transformation theory. In J. Mezirow & Associates (Ed.),
Learning as
transformation: Critical perspectives on a theory in progress
(pp. 3-33). San
Francisco: Jossey-Bass.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage Publications.
Misko, J. (1994). Flexible delivery: Will a client-focused system mean better learning? Adelaide: National Centre for Vocational Education Research.
Moore, M. (1993). Is teaching like flying? A total systems view of distance education. American Journal of Distance Education, 7(1), 1-10.
Neuhauser, C. (2002). Learning style and effectiveness of
online and face-to-
face instruction. American Journal of Distance Education,
16(2), 99-113.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
Olapiriyakul, K., & Scher, J. (2006). A guide to establishing
hybrid learning
courses: Employing information technology to create a new
learning experience,
and a case study. Internet & Higher Education, 9(4), 287-301.
O'Toole, J. M., & Absalom, D. (2003). The impact of blended learning on student outcomes: Is there room on the horse for two? Journal of Educational Media, 28(2/3), 179-190.
Overbaugh, R., & Lin, S. (2006). Student characteristics, sense
of community,
and cognitive achievement in web-based and lab-based learning
environments.
Journal of Research on Technology in Education, 39(2), 205-
223.
Parkinson, D., Greene, W., Kim, Y., & Marioni, J. (2003).
Emerging themes
of student satisfaction in a traditional course and a blended
distance course.
TechTrends, 47(4), 22-28.
Phipps, R., & Merisotis, J. (1999). What's the difference? A
review of
contemporary research on the effectiveness of distance learning
in higher education.
Washington, DC: The Institute for Higher Education Policy.
Accessed online on
February 16, 2007 at
http://www.ihep.org/Pubs/PDF/Difference.pdf
Quality on the line: Benchmarks for success in internet-based
distance education.
(2000). Washington, DC: The Institute for Higher Education
Policy. Retrieved
September 16, 2006, from
http://www.ihep.com/Pubs/PDF/Quality.pdf#search
=%22Quality%20on%20the%20line%20benchmarks%22
Reeves, T., Herrington, J., & Oliver, R. (2004). A development
research
agenda for online collaborative learning. Educational
Technology Research and
Development, 52(4), 53-65.
Riffell, S., & Sibley, D. (2005). Using web-based instruction to
improve
large undergraduate biology courses: An evaluation of a hybrid
course format.
Computers & Education, 44(3), 217-235.
Rogers, A. (2002). Teaching adults (3rd ed.). Philadelphia:
Open University
Press.
Russell, T. (1999). The no significant difference phenomenon.
Chapel Hill, NC:
Office of Instructional Telecommunications, University of
North Carolina.
Sherer, P. D., Shea, T. P., & Kristensen, E. (2003). Online
communities of
practice: A catalyst for faculty development. Innovative Higher
Education, 27(3),
183-194.
Sikora, A. (2002). A profile of participation in distance
education: 1999-2000.
Retrieved January 8, 2005, from National Center for Education
Statistics Web
site http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003154
Skylar, A., Higgins, K., Boone, R., Jones, P., Pierce, T., & Gelfer, J. (2005). Distance education: An exploration of alternative
(2005). Distance education: An exploration of alternative
methods and types
of instructional media in teacher education. Journal of Special
Education
Technology, 20(3), 25-33.
Snell, C., & Penn, E. (2005). Developing an online justice studies degree program: A case study. Journal of Criminal Justice Education, 16(1), 18-36.
Summers, J., Waigandt, A., & Whittaker, T. (2005). A
comparison of student
achievement and satisfaction in an online versus a traditional
face-to-face
statistics class. Innovative Higher Education, 29(3), 233-250.
Sumsion, J., & Patterson, C. (2004). The emergence of
community in a
preservice teacher education program. Teaching and Teacher
Education, 20(6),
621-635.
Swenson, P., & Evans, M. (2003). Hybrid courses as learning
communities.
In S. Reisman (Ed.), Electronic learning communities issues and
practices (pp.
27-72). Greenwich, CT: Information Age Publishing.
Thompson, M. (2005). Structural and epistemic parameters in
communities
of practice. Organization Science, 16(2), 151-164.
Twigg, C. A. (1999). Improving learning & reducing costs:
Redesigning large-
enrollment courses. Center for Academic Transformation.
Retrieved April 20,
2005, from http://www.thencat.org/Monographs/monol.pdf
Twigg, C. A. (2001). Innovations in online learning: Moving
beyond no
significant difference. Troy, NY: Pew Learning and
Technology.
Twigg, C. A. (2003). Improving learning and reducing costs: New models for online learning. Educause Review, 38(5), 28-38.
Veronikas, S. W., & Shaughnessy, M. F. (2004). Teaching and learning in a hybrid world: An interview with Carol Twigg. Educause Review, 39(July/August), 51-62. Retrieved February, 2006, from http://www.educause.edu/apps/er/erm04/erm044.asp
Waits, T., & Lewis, L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001 (NCES 2003-017). U.S. Department of Education, National Center for Education Statistics. Retrieved January 8, 2005, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003017
Wenger, E. (1998). Communities of practice: Learning,
meaning, and identity.
Cambridge: Cambridge University Press.
Wenger, E., McDermott, R., & Snyder, W. M. (2002).
Cultivating communities of
practice: A guide to managing knowledge. Boston: Harvard
Business School Press.
Wilke, D., & Vinton, L. (2006). Evaluation of the first web-
based advanced
standing MSW program. Journal of Social Work Education,
42(3), 607-620.
Woods, R., Baker, J., & Hopper, D. (2004). Hybrid structures:
Faculty use and
perception of web-based courseware as a supplement to face-to-face instruction.
Internet & Higher Education, 7(4), 281-297.
Yang, Y., & Cornelious, L. F. (2004, October). Ensuring quality in online education instruction: What instructors should know? Paper presented at the Association for Educational Communications and Technology conference, Chicago, Illinois. ERIC Document Number: ED484990. Retrieved September 23, 2006.
Young, S. (2006). Student views of effective online teaching in higher education. The American Journal of Distance Education, 20(2), 65-77.
APPENDIX: STUDENT/INSTRUCTOR HYBRID EVALUATION QUESTIONNAIRE

PART I: DEMOGRAPHIC QUESTIONS (with possible responses)

What is your age? (Students only)
  Younger than 18 / 18 to 25 / 26 to 35 / 36 to 45 / 46 to 55 / Older than 56

Where do you primarily access a computer for schoolwork? (Students only)
  Home (desktop) / Mobile (laptop) / Student computer center / Library / Friend/Relative residence / Other

How do you most often connect to the Internet? (Students only)
  Home (high-speed) / Home (dial-up) / Away-from-home (high-speed) / Away-from-home (dial-up)

Degree (Students only)
  Undergraduate / Graduate / Post-Baccalaureate / Elementary Education / Secondary Education / Special Education

Current Semester
  Semester 1 / Semester 2 / Semester 3 / Semester 4

Course ID

Course Title

LAST Name of your Instructor

For this semester, how many of your courses incorporated online days? (Students only)
  One / Two / Three / Four / Five

For this course, approximately how many face-to-face days were replaced with online activities this semester? (Instructors only)
Christina Leimer ([email protected]) is associate vice president
for institutional
effectiveness at California State University-Fresno. Drawing on
her experience at two
universities and a community college, she conducts research,
writes, speaks, and consults
about organizing for evidence-based decision making and
improvement and how we
define, judge, and achieve effectiveness.
By Christina Leimer
ORGANIZING FOR
Evidence-Based
Decision Making and Improvement
In today’s accountability climate, regional accrediting bodies
are requiring colleges and universities to develop and sustain a
culture of evidence-based decision making and improvement.
But two-thirds of college presidents in a 2011 Inside Higher Ed
survey said their institutions are not particularly strong at using
data for making decisions. And despite accreditors’ intense
focus on learning outcomes as a core
piece of evidence of institutional effectiveness, a 2009 National
Institute for Learning
Outcomes Assessment (NILOA) survey revealed that 60 percent
of provosts believe that
they need more faculty engagement and more technical
expertise to strengthen the assess-
ment of learning on their campuses.
In my 17 years of working in institutional research, learning
outcomes assessment,
strategic planning, and accreditation, I have watched the
external demands for the col-
lection and use of information escalate. The response is often an
increased data flow that
occurs a year or so before the accreditors arrive and ebbs when
they leave campus. But I
have also experienced faculty and administrators becoming
enthusiastic as they engage
in the design, collection, analysis, and discussion of data and
the decision making that it
informs. When sustained, this enthusiasm for information about
institutional performance
becomes part of campus culture.
When this doesn’t happen, it is generally not for a lack of
expertise about how to con-
duct such research—colleges and universities are filled with
people who know how to do
just that. Nor is the problem solely about autonomy, although
that certainly plays a major
role in resistance to accountability demands, as do concerns
about unfair comparisons of
institutions and the difficulty of measuring complex skills and
organizational impacts.
Instead, two major elements are often missing that are necessary
to spark and sustain
evidence-based decision making and improvement. One is
leadership in making sense of,
strategically applying, and communicating data and findings to
diverse audiences in ways
that prompt organizational learning and stimulate people’s
desire to know more and then
to act on the information. The other is a highly visible
institutional research (IR) function
that is integrated with complementary functions such as
assessment and planning and that
is widely recognized as integral to shaping institutional policy
and practice.
The Role of Leadership in Evidence-Based Decision Making
Evidence-based decision making—which accreditors
expect colleges and universities to engage in continuously—
combines professional experience with data, research,
and literature to draw conclusions, make judgments, and
determine courses of action. When an information-based
mode of thinking and working is part of the culture, people
reflexively ask questions and search for relevant data before
deciding on a new program or developing initiatives. They
routinely evaluate learning, processes, and progress toward
goals to determine whether the programs and initiatives are
achieving the desired outcomes. In such a culture, reflecting
on practice and asking “how do we know?” is standard fare.
Developing such a culture takes sustained effort over a
long period of time at multiple levels of the organization.
But someone needs to take the lead—to advocate for, and
maintain focus on, this mode of thinking and practice. On
most campuses, no position or office is assigned this role.
An IR office and other operational units may provide
data, but this in itself does not promote their use, nor is their
application self-explanatory. For culture to change, someone
must turn data into information and institutional knowledge
through analysis and interpretation. Then someone needs to
be responsible for setting that knowledge in the context of
institutional goals and disseminating it in multiple formats
that are appropriate to particular stakeholders, in order to
inform recommendations and planning.
By participating in initial and ongoing discussions of pro-
grams and initiatives, personnel with research and evaluation
backgrounds can help frame questions so that they can be
answered empirically and relate to issues of concern. They
can then help communicate the results to the campus. Over
time, an accumulation of examples of the positive effects of
data use will help keep evidence-based decision making a
valued component of campus life.
IR offices can play a significant role in such change, yet
they are often underutilized. In a 1996 survey, 90 percent
of college presidents said they wanted their IR offices to be
proactive, but only half said that they were fulfilling this ex-
pectation. Some long-term IR professionals also recognize
that the conventional IR role is too narrow for the issues fac-
ing higher education today, but their offices may have insuf-
ficient staff or expertise to take on higher-level challenges. It
is often the case as well that campus leaders perceive IR as
a merely technical or reporting office that is too low in the
hierarchy to be involved in strategic discussions.
Whatever the reason, at most campuses, IR has neither
been assigned nor assumed a prominent role in culture
change. This is why I sometimes hear from senior admin-
istrators and faculty accreditation leaders that “more” than
IR is needed. But it is often unclear what that “more” is and
how to achieve it.
New Models for Fostering Evidence-Based Decision Making
Literature on the mechanics of learning outcomes assess-
ment and the technical aspects of conducting IR is volumi-
nous, but not many models exist for how to organize for ev-
idence-based decision making and improvement. During the
last 15 years, institutional effectiveness (IE) offices and units
have emerged as one response: The Directory of Higher
Education listed 43 IE offices in 1995 and 375 in 2010. The
number of IR offices increased during this period as well,
from 672 to 1,499. So what are the differences between these
two types of operation?
To find out, I analyzed 30 IR and 30 IE office websites to
examine their missions, structures, staffing, and responsibili-
ties; to identify similarities and differences between the two;
and to look for clues about why IE emerged.
The responsibilities and purposes of these offices differ
across campuses. In some cases, IE is simply a rebranding
of IR. In others, the primary responsibility of IE offices is
learning outcomes assessment. However, another configu-
ration attempts to fill needs beyond those of conventional
IR. In this arrangement, IE is an umbrella title for a unit or
department that performs multiple “quality” functions: IR,
planning, assessment, academic and administrative program
review, and accreditation.
To further investigate the purposes and operations of this
configuration, I conducted semi-structured phone interviews
with the lead managers of such offices at 19 US colleges and
universities. The purpose was to determine how and why
these offices began, how they are organized, why this con-
figuration was chosen, and its benefits and challenges.
The institutions studied include a range of public and pri-
vate institutional types, sizes, Carnegie classifications, ac-
creditation regions, and geographic locales. They are all not
for profit, with enrollments from less than 5,000 to 30,000,
although the sample contains more smaller than larger insti-
tutions.
In only six of the 19 cases were these offices designated as
IE; one was entitled IR. Most often, they had hybrid names
such as Institutional Research, Assessment, and Planning.
While many offices are named this way, they usually do
not have administrative oversight of all of these functions.
Instead, they provide data and research that supports some of
these functions, which are carried out by others.
Because the names vary, I refer to the configuration that
combines administrative responsibility for the quality func-
tions for the purpose of evidence-based decision making and
improvement as the integrated model (IM). Most of the of-
fices in my study had assumed an integrated form within the
last nine years, and they were still changing as they adapted
to new needs.
The Integrated Model (IM)
The IM model is a solution to a need for culture change
that exceeds the capabilities of conventional IR offices
to support. While they still analyze data, IM offices take
more of a leadership role than conventional IR ones do. IM
personnel educate and advocate for the use of evidence in
decision making. They may also bring their knowledge of
external trends and issues affecting higher education and
their institutions into presentations, analyses, and discus-
sions in ways that can help challenge assumptions, deepen
questioning and exploration, and prompt reflection that can
lead to change.
Personnel in these offices advise and consult with execu-
tives, middle managers, and faculty. They coordinate, facili-
tate, and develop processes, procedures, and structures that
help make data use part of the culture, such as workshops,
blogs, research review teams, or linkages between assess-
ment and planning. They monitor and document progress to-
ward strategic planning goals and play a key role in program
review or accreditation. Evaluating initiatives and programs
or partnering with operational managers to do so is common.
IM office personnel may participate in establishing insti-
tutional goals through committee memberships, consulting
with managers, and/or facilitating goal-setting processes
such as retreats, forums, or other planning activities. They
offer methodological training to managers and faculty to
help them assess performance in their own areas.
In assuming responsibility for encouraging the use of re-
sults, these offices act as catalysts for change. For instance,
they may initiate opportunities to engage constituents in the
institution’s research agenda. Doing so creates familiarity
with the process, demonstrates its value, garners support,
and improves the quality of research and evaluation by
bringing diverse perspectives to complex questions.
By linking the use of evidence to problems of interest to
constituents, they may be able to spark curiosity and influ-
ence attitudes and perspectives that help develop an appre-
ciation for data use. Integrating these functions coordinates
a set of tools that helps executives, senior managers, and
faculty identify where the organization is successful and
where it is lagging, thereby helping to focus on internal im-
provement.
At many colleges and universities both IR and assessment
offices are chronically understaffed, as presidents respond-
ing to a 1996 survey acknowledged. Despite 15 years of
increasing demands, most IR offices are still one- or two-
person departments, and in a 2009 survey, NILOA found few
resources devoted to learning outcomes assessment. In such
cases, staffing may need to increase.
However, integrating quality-improvement functions and
drawing on their natural fit, respondents in the study said,
creates greater efficiency, better products, synergies, and
focus. So while the configuration does not allow for fewer
staff, the office’s productivity may well increase. Staff in in-
tegrated offices can more equitably distribute their work and
make better use of individuals’ strengths. Bringing together
their multiple skills and perspectives allows for richer analy-
ses and a larger view of institutional issues and provides op-
portunities for staff to learn from each other. This is helpful
for contextualizing data and other research findings and for developing their implications.
Uniting complementary skill sets creates another benefit.
In general, IR professionals have stronger technical skills
than assessment professionals, and assessment professionals
possess better interpersonal skills than their IR colleagues.
Both skill sets are needed, but the combination may be dif-
ficult to find in one individual. When assessment and IR pro-
fessionals work together, the products and services they can
offer become a stronger force for change.
In addition to this greater tangible value, a high-visibility
department whose responsibilities reflect the organization’s
commitment to effectiveness can keep this method of operat-
ing in collective awareness. Personnel who find opportunities
to consult, providing user-friendly information and engaging
in ongoing discussions of institutional goals and problems,
create an effectiveness orientation and normalize the use of
evidence in making decisions.
Changing culture is a complex undertaking that requires
ongoing effort from many people in different parts of the
organization using their various types of authority and influ-
ence. The IM office can be a crucial participant in this effort.
Building Capacity
Integrating Functions
Integrating the quality functions can fill both the leader-
ship and infrastructure gaps that impede data-informed deci-
sion making and the development of a culture of evidence
and improvement. But to do so, institutions first need to take
stock of the existing functions, their current locations, and
the extent to which they are collectively performing culture-
development tasks. Not only will this illuminate gaps in
responsibilities and institutional impediments to change—it
may identify personnel with unused skills who can be culti-
vated or professionals who want to expand their skill sets.
The 19 IM offices studied included some similar ele-
ments. The majority combined IR, assessment, and
accreditation. Nine had strategic-planning responsibilities as
well, and five included academic and/or administrative pro-
gram review. Some offices performed additional functions,
such as institutional budgeting, business intelligence, grant
management, market research, and the student evaluation of
teaching.
All of the offices in this study combined their chosen set of functions in a centralized unit, although the specific components varied. One research university, for example,
merged IR, learning outcomes assessment, program evalu-
ation, decision support, and business intelligence into a
single unit. An undergraduate teaching university combined
IR, learning outcomes assessment, strategic planning, ac-
creditation, testing, program review, and university relations
and communications in creating the IM office. Although it
was not part of my study, perhaps the oldest and best-known
integrated unit is at Indiana University–Purdue University
Indianapolis, where the division is called Planning and
Institutional Improvement. Its functions include IR, informa-
tion management, institutional planning, learning outcomes
assessment, program review, economic modeling, and the
testing center.
Integration can be achieved in a more decentralized
manner as well. At my own institution, California State
University–Fresno, IR and learning outcomes assessment
are the functions of the IM office, but strategic planning is
located with the president, accreditation with the associate
provost, and academic program review with the undergradu-
ate and graduate deans. My membership on the strategic
planning committee, the accreditation core team, and aca-
demic program-review teams allows me to apply the tools of
my office to university goals and quality-assurance processes
and to recommend ways to improve them.
In addition, these functions link up in various ways. For
example, ongoing learning outcomes assessment is incorpo-
rated into periodic program review through the self-study.
Like most of the offices in my study, we continue to develop
mechanisms that strengthen these connections as this six-
year-old configuration evolves.
Developing the Structural Configuration
The majority of the offices in the study were developed
intentionally, prompted by accreditation requirements and/
or the vision of the president. Four have evolved in this di-
rection over time, and five are being developed on the fly as
needs arise. In some cases, the offices are brand new; in oth-
ers, existing offices are being expanded or multiple offices
merged.
Executives’ authority and engagement in evidence-based
decision making is critical to planning and developing such
a configuration. The lead manager of an IR, assessment, or
planning office may propose a plan, as happened at some in-
stitutions in the study, but only presidents and provosts have
the authority to change infrastructure, allocate resources, and
set institutional priorities and direction. Therefore, they must
visibly take the lead in establishing such a configuration and
must support this new approach on an ongoing basis to en-
sure that it succeeds in influencing culture.
Crucial to this support is a multi-year plan in which the
unit is incorporated into relevant decision-making venues,
responsibilities are shifted or added, staff are relocated or
hired, and personnel and office titles are changed as needed.
Depending on need, circumstances, and resources, the con-
figuration can be created gradually or rapidly.
An existing IR or assessment office usually serves as the
nucleus to which the other functions are added. Independent
offices in different divisions are sometimes merged into a
single unit. Especially in large colleges and universities, in-
dividual staff members often perform IR or assessment work
in an operational unit such as a registrar’s or dean’s office.
Moving them into the new IM office may be an option that
provides them with colleagues from whom they can learn
and gain support while adding capacity or gaining efficien-
cies. Any type of organizational relocation or merging of
offices and personnel requires ongoing attention to facilitate
a smooth transition, help a group of individuals coalesce into
a team, and ensure that managers who lose staff members
continue to get their needs met.
Two cautions should be kept in mind in developing and
managing an integrated office. First, the focus and efficiency
of such an office can be unintentionally diluted by add-
ing responsibilities that detract from the goal of fostering
an evidence-based decision-making culture. One manager
described her developing office as attracting programs that
were not working with the expectation that she would im-
prove them. Another said that activities for which there was
no other home or that no one else wanted were given to her
department. This vulnerability is heightened in offices that
evolve organically and in ones that have been renamed IE
without their responsibilities being clearly defined.
The second caution is to ensure that the new configura-
tion does not become a super-compliance office, orienting
the majority of its activity to external accountability rather
than to internal improvement. One manager suggested that
accreditation should be excluded from this new arrange-
ment because it carries so much weight that it could have
this effect.
Reporting Lines
Almost all of the lead managers of these offices reported
directly to the president or the chief academic officer—one
to both. Only one of them reported below the vice-presiden-
tial level. Access to high-level decision makers is impor-
tant to IM managers’ ability to work across organizational
boundaries and stay abreast of institutional issues on which
they can bring the tools of their offices to bear. More than
half of the lead managers in my study held a title higher than
director, ranging from senior director to vice president. Five
were members of their presidents’ cabinets.
Naming
As mentioned, there is no consistency in the titles of these
offices. It appears that the name IE reflects accreditors’ em-
phasis on demonstrating effectiveness, regardless of the specific set of tasks or functions that make up the office.
Among the offices in the study that were named IE or for
which there were plans to do so, the title was chosen for two
reasons. First, it reflected the purpose for which the tools of
planning, research, and assessment were going to be used—
institutional improvement and effectiveness, as institutionally
defined—rather than focusing on the tools themselves. The
other reason was that the responsibilities were intended to be broader than those of any of the functional areas.
Tidewater Community College is an example of an institu-
tion that uses the name IE for the entire unit, without losing
the titles that connote specific functions: Departments within
the unit are called IR and Student Outcomes Assessment
(SOA). The unit manager’s title is director of IE, while staff
positions are designated with a title followed by the quali-
fiers IR or SOA.
While there may be institution-specific reasons for particular titles, the dissimilarities and incongruities between titles and responsibilities across campuses are detrimental in multiple ways. Making institutional comparisons and locating the most efficient and effective models is nearly impossible when they cannot be identified by a common title. Hiring also becomes more difficult, because the department and staff titles candidates hold may reflect skills and experiences quite different from those the hiring manager would expect based on those titles. The scope of respon-
sibilities of individuals with the same title can vary widely,
as can their salaries. In the current climate, where financial
considerations are paramount at most institutions, greater
consistency in the titles of units, personnel, and responsibili-
ties would help make organizing for evidence-based decision
making more effective and perhaps less costly.
Staffing
As is true of freestanding IR and assessment offices, staff-
ing is a challenge in integrated offices. The primary issue is
too few staff and, to a lesser degree, insufficient expertise
of the existing staff. Several of the offices in the study were
understaffed, usually by at least one position. And even when
funding is allocated for positions, experienced IR and as-
sessment professionals are difficult to find.
In implementing an integrated model, campus leaders
should conduct an analysis to determine workload require-
ments and gaps in functioning. Personnel will need to be
trained to fill those gaps, and additional staff may be needed.
However, it is also possible that some tasks can be elimi-
nated or shifted to other departments to optimize the use of
existing personnel.
For instance, a common complaint among IR profession-
als is that external reporting requires so much of their time
that they cannot use their research methods and statistical
skills for institutional improvement. Not only does this mis-
match rob the institution of the full value these professionals
can offer—it is a reason many new IR professionals leave
the field. In light of changes in computing technology that
allow pre-packaged reports to be developed, it may be pos-
sible to shift some external reporting to other departments, such as the operating unit that generates the data or the information-technology unit.
The range of responsibilities in integrated offices is broader than that of a typical IR office, and so are the skills, abilities, and personal traits that lead managers in IM offices need. To varying degrees, experience with and skills in
research methods, statistical techniques, data analysis, statis-
tical software, and database management are fundamental.
But organizational, project-management, group-facilitation,
and written and oral communication skills are important too,
as are strong interpersonal skills that enable these managers
to work effectively with a range of institutional constituents,
from line staff and faculty to middle managers and execu-
tives. The abilities to build consensus, negotiate, communi-
cate in non-technical language, coordinate people and proj-
ects, and lead are key.
Personal characteristics needed include sensitivity, open-mindedness, flexibility, a capacity to listen, enthusiasm, a commitment to learning, a sense of humor, the ability to build others' self-confidence and motivate them, creativity, team-building and problem-solving capacities, a thick skin, a tolerance for ambiguity, and patience, as well as the abilities to educate, build trust, and use data to tell a compelling story.
It is essential that IM professionals know what data are
available and how they can be applied, as well as which
methodologies can be used to answer questions. They need
to understand the types of problems higher education man-
agers must address, how colleges and universities operate,
and how decisions are made there. They need to understand
the political world of academia and how to work with oth-
ers to reach institutional goals. They need to comprehend
higher education culture and the culture of their particular
institutions, as well as the external environment at the local,
regional, national, and even international level as it impinges
on institutional operations, problems, and goals.
Developing a solid understanding of the intricacies of
institutional data and their appropriate use at a particular
institution takes years. Consequently, these offices will be
better able to assist with institutional improvement and goal
achievement if they retain early-career professionals.
Most of the lead managers in the study were long-term IR
or assessment professionals—primarily the former—with
at least 10 years of experience. A little more than half had
worked in the field at least 15 years. Half had developed
their skills and knowledge within their institution; the other
half were recruited externally. Most seemed to revel in the
challenges and opportunities of changing culture.
These professionals’ combination of technical, interper-
sonal, and organizing skills allowed them to shape their new
offices and positions. When recruiting candidates to lead
integrated offices, the Association for Institutional Research
(AIR) and possibly the Society for College and University Planning would be reasonable places to contact, as would the AIR regional affiliates and regional accreditors.
However, since IM offices are a recent phenomenon and the lead-manager role is a relatively new one, there is no formal training for developing and supporting leaders in dealing with the challenges of this emerging area. As this form of organizing grows, more professionals with this complex skill set will be needed.
Because these professionals work closely with executives and across divisions and hierarchy, an executive-level leadership training program that addresses the challenges of a high level of ambiguity and a lack of direct operational authority in negotiating, mediating, facilitating, and changing culture would help mid-career IR and assessment professionals take on this role. In addition, it would offer the possibility of a career ladder to new professionals, many of whom leave IR within a few years, in part because opportunities for career advancement are typically scarce.
Leadership in integrated offices is more strategically than
technically oriented. In an article that has become a classic in
the IR profession, Terenzini (1999) describes three forms of
intelligence that are necessary for high-performing IR staff:
technical/analytical skills, a knowledge of the issues, and
contextual intelligence. Leadership that can influence culture requires all three; I describe how each can affect culture change in Table 1.
Table 1. Terenzini's Typology and Its Effects on Culture Change

Technical/Analytical
• Select appropriate institutional data and assure the accuracy of their use
• Offer technical, research, and assessment expertise
• Demonstrate utility of data-driven decisions to campus constituencies

Issues
• Combine research capability and familiarity with the campus community and its issues for richer analyses
• Explicate the implications of research findings and make evidence-based recommendations
• Strengthen planning and program development
• Anticipate stakeholders' needs
• Collaboratively frame and refine questions and focus possibilities for change

Context
• Communicate institutional issues in a broader context
• Bring knowledge of higher education trends and issues to internal discussions to expand awareness
• Apply knowledge and research findings to challenge institutional assumptions, prompt reflection, and stimulate change
• Utilize institutional alliances and understanding of the institution's culture to spread the use of data in decisions
Establishing Role Boundaries
Creating anything new involves ambiguities and raises
questions; hence, careful consideration must be given to
defining the IM manager’s boundaries and authority. The
primary responsibility for goal-setting and evaluation within
particular units or departments should remain with the op-
erational manager and, in the case of learning outcomes and
academic program review, with the faculty.
IM managers should not take on sole responsibility for
assessing everything or for overseeing quality in general.
Instead, they should make recommendations and develop
processes, structures, and policies as a member of an execu-
tive team.
Advocating for use of evidence in decision making and
institutional improvement and educating about how to do so
is the most central role of an IM office. Effective advocacy
requires a nuanced understanding of the institution’s culture
and people in order to know when to make the best use of
positional or expert authority, when to draw on relationships,
and when to utilize high-profile or low-key approaches.
Making research-based recommendations for change is an
aspect of the role, but institutional decision making rests
with those who are charged with the responsibility.
Conclusion
When external environments become more complex
and demanding, internal administrative structures usually
become more complex to deal with those demands. Better access to more data will not, in itself, meet the public's expectations for higher education to be accountable and effective. Meeting those expectations also requires changes in both organizational structure and leadership. Integrating institutional research, learning outcomes assessment, strategic planning, program review, and accreditation can help colleges and universities achieve the culture of evidence and improvement needed to respond to external demands and move the institution into its chosen future.
Resources

Davenport, T. H., Harris, J. G., & Morison, R. (2010). Analytics at work: Smarter decisions, better results. Boston, MA: Harvard Business Press.
Green, K. C., Jaschik, S., & Lederman, D. (2011). Presidential perspectives: The 2011 Inside Higher Ed survey of college and university presidents. Inside Higher Ed. Retrieved from http://www.insidehighered.com/sites/default/archive/storage/files/SurveyBooklet.pdf
Harrington, C. F., Christie, R. L., & Chen, H. Y. (1996). Does institutional research really contribute to institutional effectiveness? Perceptions of institutional research effectiveness as held by college and university presidents. Paper presented at the 36th Annual AIR Forum, Albuquerque, NM, May 5–8.
Knight, W. E., & Leimer, C. (2009). Will IR staff stick? An exploration of institutional researchers' intentions to remain in or leave their jobs. Research in Higher Education, 51(2), 109.
Kuh, G., & Ikenberry, S. (2009, October). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://www.learningoutcomeassessment.org/MoreThanYouThink.htm
Leimer, C. (2009). Taking a broader view: Using institutional research's natural qualities for transformation. New Directions for Institutional Research, 143, 85–93.
Leimer, C. (2010, July). Wave of the future? Integrating institutional research, outcomes assessment, planning, program review and accreditation. Education Resources Information Center, ED521064. Retrieved from http://20.132.48.254/ERICWebPortal/search/recordDetails.jsp?ERICExtSearch_Descriptor=%22Course+Descriptions%22&ERICExtSearch_Facet_0=facet_de&ERICExtSearch_FacetValue_0=%22Higher+Education%22&_pageLabel=RecordDetails&accno=ED521064&_nfls=false
Leimer, C. (2011). The rise of institutional effectiveness: IR competitor, customer, collaborator, or replacement? AIR Professional File, 120.
Morest, V. S. (2009). Accountability, accreditation and continuous improvement: Building a culture of evidence. New Directions for Institutional Research, 143, 17–27.
Terenzini, P. T. (1999). On the nature of institutional research and the knowledge and skills it requires. New Directions for Institutional Research, 104, 21–29.
Volkwein, J. F. (2011, March 24). IR roles, responsibilities and reporting lines [Webinar]. Tallahassee, FL: Association for Institutional Research, Professional Development Services Committee.
This chapter provides tools, resources, and examples for
engaging
qualitative inquiry as a part of institutional research and
assessment. It supports the development of individual ability
and
organizational intelligence in qualitative inquiry.
A Qualitative Toolkit for Institutional
Research
Chrystal A. George Mwangi, Genia M. Bettencourt
As an institutional researcher, Sam has just finished analyzing the results of their institution's most recent campus climate study. The quantitative findings show clearly that Students of Color have negative experiences both within academic courses and in co-curricular involvement.
Students of Color
responded in high numbers to questions about microaggressions
on cam-
pus, indicating that these pervasive acts of racism permeate
their daily ex-
periences. Students of Color were also more likely to report feeling isolated on campus and dissatisfied with the institution. Sam wants to
know more
about microaggressions on campus to be able to understand
their different
manifestations, the impact they have on Students of Color, and
potential
strategies for intervention. To meet these goals, Sam decides to
conduct
qualitative research centered on the voices of these students
experiencing
microaggressions.
Qualitative research is the result of many different decisions, all
of
which are made within unique contexts. To illustrate these
decisions and
contexts, we use the example of Sam throughout this chapter.
Like Sam,
many institutional researchers find they need to integrate
traditionally
quantitative approaches with qualitative methodologies to
obtain the full
picture of student experiences in higher education. Qualitative
methods
naturally align with institutional inquiry that focuses on
students’ experi-
ences within a certain context or set of conditions (Harper &
Kuh, 2007). As
institutions engage in increasingly complex data-driven
decision-making,
“the best decisions are based on a deeper understanding than
quantitative
methods alone can provide” (Van Note Chism & Banta, 2007, p.
15). As
such, it is crucial for institutional researchers and institutional
research of-
fices to develop qualitative expertise to support methodologies
and meth-
ods that can be applied to a spectrum of research questions
(McLaughlin,
McLaughlin, & Muffo, 2001). This chapter provides tools,
resources, and
examples for effectively grounding and conducting qualitative
inquiry as
a part of institutional research and assessment. We review key
qualitative
skills and knowledge areas such as research paradigms,
methodologies, and
methods.
Paradigms
Paradigms, also known as worldviews, are “systems of beliefs
and practices
that influence how researchers select both the questions they
study and
methods that they use to study them” (Morgan, 2007, p. 50). All
types of
research are rooted in researchers’ paradigms. Paradigms
emerge out of re-
searchers’ epistemology, ontology, and axiology, shaping how
knowledge is
sought out and interpreted. These approaches shape the choices
a researcher
makes in what and how to pursue their topic.
Although there are multiple classifications of paradigms, for
simplicity,
we utilize four overarching categories (Creswell, 2014;
Mertens, 2015):
The positivist paradigm focuses on explaining, testing, and
predicting phe-
nomena (Guido, Chávez, & Lincoln, 2010). Information is
objective and
value-free, and exists within one true reality. This paradigm has
evolved
into postpositivism by incorporating a more critical lens to
examine how
a cause determines an effect or outcome (Creswell, 2014). In
the former,
a researcher might conduct a study to prove a hypothesis is
correct and
to discover the truth. In the latter, researchers aim to reject a
null (false)
hypothesis to move closer to the truth.
The constructivist, or interpretive, paradigm views knowledge
as socially
constructed and individuals’ experiences as framed by their
unique con-
text. Individuals have a subjective reality based on
understanding their
views (Creswell, 2014). Instead of a universal Truth, there are
only truths
that exist for individuals that are reliant on their context and
time (Guido
et al., 2010).
The critical, or transformative, paradigm can incorporate
numerous the-
ories that examine the experiences of marginalized individuals
and
unequal distributions of power. This approach tends to
emphasize col-
laborative research processes to avoid perpetuating power
imbalances
(Creswell, 2014). These approaches look to restructure the
status quo,
with the goal of social change. Critical designs may utilize
nonhierarchi-
cal methodologies that aim to involve participants as co-
researchers on
investigating a problem and implementing change, such as
participatory
action research. More widely, critical researchers also cite this
paradigm
as a way of interpreting results.
The pragmatic paradigm emphasizes that researchers choose the
meth-
ods, processes, and tools that best answer the research question
at hand
(Creswell, 2014). Pragmatic paradigms are most commonly
associated
with mixed-methods research.
Sam is interested in engaging in-depth with student voices and
expe-
riences, to understand how their experiences on campus are
informed by
their interactions with others, their daily lives, and their social
identities. As
such, Sam identifies that their research is rooted in a
constructivist paradigm
that prioritizes the context of diverse groups of students to learn
more about
their experiences and perspectives.
Crafting Questions
Qualitative data can provide a great deal of information, some
of which may
be beyond the scope and nature of what the researcher wants to
investigate.
Like research paradigms, crafting a research question(s) helps
to constrain
the scope of a study. Research questions provide guidance for
one’s inquiry
and require a response that emerges from data and analysis.
When a study
becomes overwhelming, it is important to remember that a
primary goal is
to answer the research question(s). Good research questions
stem from the
purpose of the study. Consider whether the research purpose is
to describe a
phenomenon or explain and theorize about it (Marshall &
Rossman, 2006).
Is it to explore a problem that has not been previously examined
or to em-
power others and create greater equity (Marshall & Rossman,
2006)? An-
swering these can help determine how to craft the research
question(s). The
methodology is another way to help develop the research
question(s). For
example, an ethnographic study often incorporates a question
about cul-
ture. Similarly, a theoretical/conceptual framework may also
influence the
nature of the question(s).
Qualitative research questions are distinct from quantitative
research
questions in that they tend to ask: How? and/or What?
Qualitative research
questions often do not begin with “Why?” because this tends to
be driven
by cause and effect or a quantitative purpose. It is important that qualitative research questions not be answerable with a simple yes, no, or one-word discrete answer. They should balance breadth and specificity.
For example,
a researcher may want to ask a question that will solve a major
problem on
campus. However, given the complexity of that problem, the
study may not
be able to solve it. Instead, ask questions that engage the larger
problem
by contributing to its solution or that contribute to a better
understanding
of the problem. The question(s) should be feasible and
researchable given
one’s resources, skills, and knowledge (Lawrence-Lightfoot,
2016). As with
other parts of qualitative inquiry, the development of a research
question
can also be an iterative process. In fact, Stage and Manning
(2016) state,
“Rarely is a research question as clear in the beginning of the
study as it is
at the end” (p. 8). Therefore, researchers can change or revise
the research
question (or add subquestions) as the study progresses and the
data emerge.
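The wording conventions above are concrete enough to sketch as a rough screening aid. The following Python snippet is our own illustration, not part of the chapter; it flags only surface wording, and judging scope, feasibility, and fit with the methodology remains the researcher's work:

def check_question(question):
    """Heuristic screen for qualitative research question wording."""
    first = question.strip().split()[0].rstrip("?").lower()
    issues = []
    if first in {"do", "does", "is", "are", "can", "will"}:
        issues.append("likely answerable with a simple yes/no")
    if first == "why":
        issues.append("'why' often implies a cause-and-effect purpose")
    if first not in {"how", "what"}:
        issues.append("qualitative questions tend to ask 'how' or 'what'")
    return issues or ["no obvious wording issues"]

print(check_question(
    "How do Students of Color experience microaggressions on campus?"
))  # -> ['no obvious wording issues']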
Sam asks two research questions:
1. How do Students of Color experience microaggressions on campus?
2. What impact do Students of Color perceive microaggressions have on their college experience?
The first question allows for the collection of data that
describes occur-
rences of microaggressions toward Students of Color and
focuses on these
students’ lived experience. Although qualitative data cannot
produce “cause
and effect” findings, they can elucidate the perceived impact of
an action.
The second question will lead Sam to collect data that describes
the way
that Students of Color feel affected by microaggressions to
demonstrate the
severity of the problem and inform campus interventions.
Overview of Methodologies
Methodologies reflect branches of knowledge and strategies of inquiry that influence research choices (Patton, 2015). They are
the
guideposts that help a researcher ground a study and shape
additional
components of the research design. Although some studies
claim a generic
qualitative approach without selecting a methodology, thinking
systemat-
ically about methodology can help researchers to align research
questions,
data collection processes, and data analyses (Patton, 2015).
There are
many different qualitative methodologies, but here we have
selected four of
the more common in higher education research: case study,
ethnography,
grounded theory, and narrative inquiry.
Case Study. Case study is an appropriate method when the re-
searcher wants to explore contextual conditions that might be
critical to
the phenomenon of study (Yin, 2003). Within this approach, it
is essential
to define the boundaries of a case, which are set in terms of
time, place,
events, and/or processes (Merriam, 1998; Yin, 2003). The case
(also de-
scribed as a bounded system or unit of analysis) is the focus of
the study
(Merriam, 2009). Case study researchers utilize several sources
of informa-
tion in data collection to provide in-depth description and
explanation of
the case (Merriam, 2009). Research can be composed of a single case or multiple cases that are analyzed and/or compared. There are
different types
of case studies. For example, a descriptive case study generates
a rich, thick,
and detailed account that conveys understanding and
explanation of a phe-
nomenon (Merriam, 1998). Interpretive case studies go beyond
describing
the phenomena to present data that support, challenge, or
expand existing
theories (Merriam, 2009). Finally, exploratory case studies help
to deter-
mine the feasibility of a research project and solidify research
questions and
processes (Yin, 2003).
Ethnography. Situated within the field of anthropology, ethnography seeks to understand and describe cultural and/or social groups (Spradley, 1979). Ethnographic studies examine individuals and
groups in-
teracting in ordinary settings and attempt to discern pervasive
patterns such
as life cycles, events, and cultural themes. Ethnography
describes a culture-
sharing group, uses themes or perspectives of the culture-
sharing group
for organizational analysis, and seeks interpretation of the
culture-sharing
group for meanings of social interaction (Spradley, 1979).
Ethnography as-
sumes that the principal research interest is largely affected by
community
cultural understandings. Thus, “ethnographies recreate for the
reader the
shared beliefs, practices, artifacts, folk knowledge, and
behaviors of some
group of people” (LeCompte, Preissle, & Tesch, 1993, pp. 2–3).
Ethnogra-
phy can be emic (focused on the perspectives of the group under
study),
etic (focused on the researcher/outsider perspective), or blend
the two ap-
proaches. The ethnographic process of inquiry suggests
prolonged observa-
tion within a natural setting and in-depth interviews.
Ethnographic studies
also define the researcher as a key instrument in the data
collection process,
who describes and interprets observations of the cultural group
(Mertens,
2015).
Grounded Theory. Grounded theory is an explanatory
methodology
developed to construct theory that emerges from and is
grounded in data
(Glaser & Strauss, 1967). Through this process, researchers can
create a
substantive theory, which is a working theory for a specific
social process
or context (Corbin & Strauss, 2008; Strauss & Corbin, 1998;
Glaser &
Strauss, 1967). Grounded theorists do not use theoretical
frameworks and
historically have sought to limit a priori knowledge of the
problem being
studied (Glaser & Strauss, 1967), but more recent approaches
have em-
phasized the need for sensitizing concepts, or ideas from extant
literature,
to provide a structure for inquiry (Charmaz, 2014). This allows
for sub-
stantive theory to be created inductively, from the data.
Grounded theory
is also defined by its sampling and data analysis procedures.
Grounded
theory researchers use theoretical sampling by selecting
participants based
on relevant constructs and participants’ experience with the
phenomenon
under study, rather than solely demographic criteria (Strauss &
Corbin,
1998). Researchers should use data from their initial sample as
a guide for
recruiting additional participants to provide data to address
emerging cate-
gories (Charmaz, 2014; Corbin & Strauss, 2008; Strauss &
Corbin, 1998).
When new data from the sample no longer add to a category or
concept,
the study has reached theoretical saturation and the sampling
process ends.
Grounded theory is also known for the constant comparative
method of
analysis in which data are iteratively collected and compared to
emerging
categories through a coding process (Strauss & Corbin, 1998).
The con-
stant comparative method will be further explained in the Data
Analysis
section.
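Theoretical saturation is, at bottom, a stopping rule, which can be illustrated procedurally. The Python sketch below is our own and uses hypothetical codes and a hypothetical threshold; in practice, saturation is a judgment about categories, not a simple count:

def reached_saturation(coded_interviews, window=3):
    """Return the interview number at which `window` consecutive
    interviews added no new codes, or None if sampling should continue."""
    seen = set()   # codes observed so far
    quiet = 0      # consecutive interviews adding nothing new
    for i, codes in enumerate(coded_interviews, start=1):
        new = set(codes) - seen
        if new:
            seen |= new
            quiet = 0
        else:
            quiet += 1
        if quiet >= window:
            return i
    return None

# Hypothetical codes assigned to five successive interviews
interviews = [
    {"isolation", "microaggression"},
    {"microaggression", "support seeking"},
    {"support seeking"},
    {"isolation"},
    {"microaggression"},
]
print(reached_saturation(interviews))  # -> 5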
Narrative Inquiry. Narrative inquiry centers on telling a story
or stories and thus “takes as its object of investigation the story
itself”
(Riessman, 1993, p. 1). Researchers using this methodology
organize the
narrative of a single participant or narratives of multiple
participants to
share, shape, and connect their experiences (Chase, 2011).
Chronology
and timeline are central features of narrative inquiry (although
narratives
themselves do not need to follow a linear story). In addition,
this method-
ological approach often involves multiple, in-depth interviews
and/or other
data such as existing documents, and necessitates a reflexive
relationship
between researchers and their participants in order to re-tell
stories
through empirical findings (Chase, 2011). Data collection
methods for this
approach should allow for telling by the participant(s),
interpretation of
the experience(s) by the researcher, representation of the story
or stories,
and reflection on assumptions made about the self while
engaging in
telling and re-telling the narratives (Jackson & Mazzei, 2013).
There are
many forms of narrative inquiry, including oral histories,
biographies,
testimonies, and memoirs. Given Sam’s interest in focusing on
the voices of
Students of Color regarding microaggressions, they select
narrative inquiry.
This methodology can use participants’ stories to expose
oppressive actions
(Chase, 2011). Narrative inquiry will shape the study’s
emphasis on exam-
ining students’ experiences with microaggressions throughout
their time at
the university and in eliciting specific examples or stories related to those experiences.
Tools for Data Collection
The main types of data collection in qualitative research include
partici-
pant observation, individual interviews, and focus groups
(Guest, Namey,
& Mitchell, 2013). The research questions and methodologies may lead toward a certain type of data collection or toward a study that combines multiple approaches to gather data (a multimodal design). All three
approaches re-
quire some initial planning beyond crafting questions to include
establish-
ing a location, obtaining any necessary tools prior to
implementation (e.g.,
recording devices), and dedicating time immediately afterward
to process
through initial reflections and analysis (Guest et al., 2013).
Observations. Observations are typically the result of the re-
searcher’s experiences in a given situation or environment. As
opposed to
direct observation, like the detail recovered by a video camera
or a two-
way mirror, participant observation includes the researcher as a
part of the
environment directly absorbing and processing information
(Guest et al.,
2013). Researchers are engaged in the environment by taking
notes, record-
ing their environment, and asking questions to uncover meaning
(Guest
et al., 2013). This form of data collection is used to discover
complex inter-
actions in social settings (Marshall & Rossman, 2006). By being in a space where the topic of interest occurs, researchers can record the behavior of interest as it happens and provide context (Merriam, 2009). The
degree of
what a researcher can observe may be determined by the
relationships they
have in the community, the access they negotiate, and the
amount of time
spent gathering data (Guest et al., 2013).
In observations, the goal of the researcher is to record field
notes
with a high degree of detail. These notes involve physical
surroundings,
context, people, and their actions (Neuman, 2006). Prior to
beginning
observations, the researcher should choose an organizational
system that
will allow for tracking direct observations with inferences,
analysis, and
personal journaling (Neuman, 2006). Although many of these notes are recorded during the observation, the researcher should also budget time
shortly after finishing the observation to jot down additional
notes. The
time after observation may be used to create analytic memos in
which to
record plans, reflect on ethical decisions, and create maps or
diagrams of
occurrences or relationships (Neuman, 2006). Although observations may involve a large time commitment of many hours, as a form of data collection they allow a researcher to engage directly with human behavior, particularly behavior of which participants are less aware or that they find difficult to discuss.
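Neuman's advice to keep direct observations, inferences, and personal journaling distinct can be modeled as a simple record structure. This Python sketch is a hypothetical illustration; the field names are ours, not an instrument from the literature:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FieldNote:
    """One observation entry, keeping note types separate."""
    setting: str                 # physical surroundings and context
    observation: str             # what was directly seen or heard
    inference: str = ""          # the researcher's interpretation
    journal: str = ""            # personal reactions, ethical notes
    recorded: datetime = field(default_factory=datetime.now)

note = FieldNote(
    setting="Student union, Tuesday evening tutoring session",
    observation="Two students left within five minutes of arriving.",
    inference="Possible discomfort with the group's composition.",
    journal="Check this reading against follow-up interviews.",
)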
Interviews. The most popular form of data collection, individual
in-
terviews use open-ended questions to learn about participants’
experiences,
memories, reflections, and opinions (Magnusson & Marecek,
2015). Differ-
ent types of interviews allow researchers to incorporate varying
degrees of
flexibility as desired by their paradigm, methodology, and style.
There are four interview types (Rossman & Rallis, 2017; adapted from Patton, 2015): (a) informal interviews in a casual setting, often recorded through field notes; (b) an interview guide approach, with preset categories and topics but flexibility to address emerging topics; (c) a standardized open-ended interview with a set order of fixed questions; and (d) true conversations in the form of dialogic interviews. The goal of an
interview is to gain
rich, in-depth, personal experiences that relate directly to the
research topic
(Magnusson & Marecek, 2015).
To conduct an interview, a researcher should have “superb
listening
skills and be skillful at personal interaction, question framing,
and
gentle probing for elaboration” (Marshall & Rossman, 2006).
Guest and
colleagues (2013) recommend using interviews to gain in-depth
insight,
explore new topics, and gain information about potentially
sensitive or
polarizing topics. In approaching interviews, they provide the
following
suggestions:
• Schedule interviews at times that are mutually convenient, with an emphasis on the interviewee's preferences.
• Allot around 45–90 minutes for an in-depth interview.
• Pilot the interview protocol prior to implementation to ensure effectiveness.
• Plan ahead for what kind of data will be needed during analysis. This can include summaries of the conversation, expanded interview notes, audio/video recordings, and verbatim transcripts.
Although these suggestions provide an initial framework, all
decisions
around interviews are contingent on an understanding of the
participants
and topic under study.
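Those planning decisions can be held in a lightweight protocol record so that timing, piloting, and intended data outputs are settled before fieldwork begins. A minimal Python sketch, with hypothetical names and defaults of our own choosing:

from dataclasses import dataclass, field

@dataclass
class InterviewProtocol:
    """Plan for an individual interview study."""
    topic: str
    questions: list                 # open-ended, in intended order
    minutes_allotted: int = 60      # within the 45-90 minute guideline
    piloted: bool = False           # tested prior to implementation
    planned_outputs: list = field(
        default_factory=lambda: ["audio recording", "verbatim transcript"]
    )

    def ready(self):
        # Run interviews only after piloting, within the time guideline
        return self.piloted and 45 <= self.minutes_allotted <= 90

protocol = InterviewProtocol(
    topic="Campus climate",
    questions=["How would you describe your experience on campus?"],
    piloted=True,
)
print(protocol.ready())  # -> True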
Focus Groups. For researchers interested in understanding how
in-
dividuals discuss a topic collectively, focus groups can save
time and money
while gathering rich data. Focus groups tend to be most useful
to gain in-
formation on group norms and processes, opinions and
perspectives, reac-
tions and responses, and brainstorming (Guest et al., 2013).
Because focus
groups allow the researcher to see real-time responses, they
provide bene-
ficial opportunities to view how individuals agree, disagree, or
respond to
one another. A key benefit of focus groups is their assumption
that an indi-
vidual’s attitudes and beliefs do not form in a vacuum;
participants develop
their opinions and understandings by engaging with others
(Marshall &
Rossman, 2006).
The ideal group contains approximately 7–10 individuals who are ideally strangers, to encourage varying viewpoints (Rossman & Rallis, 2017). Utilizing strangers also helps to decrease the social desirability bias that can occur in interview settings, in which participants respond or behave in the way they believe is expected.
Depending
on the study, researchers could choose to recruit homogenous or
hetero-
geneous groups of participants (Mertens, 2015). As focus
groups include
multiple moving pieces, they rely greatly on the skill of the
facilitator to
keep the conversation on track, ask appropriate probes, and
ensure a bal-
ance of voices. Interview protocols should establish ground
rules prior to
beginning, prioritize key questions to allow for as much fluidity
in the con-
versation as possible, and create a limited time commitment
(Guest et al.,
2013).
For their study, Sam decides to do individual interviews to
understand
how Students of Color describe microaggressions and their
manifestations
within the context of their overall college experience. Sam
chooses inter-
views because microaggressions can be a sensitive topic for
individuals to
share in a focus group, and there is no clear context in which
Sam could
conduct observations of this behavior. They choose a
standardized open-
ended interview with questions that include:
1. In thinking about the past week, can you describe any microaggressions you have encountered and the context in which they occurred?
2. How would you describe the impact of these microaggressions on your overall student experience?
Sam prepares prompts for the interview questions and pilots the
inter-
view protocol with several colleagues who identify as People of
Color before
determining that the interviews will last around an hour each.
Data Analysis
Although there are numerous qualitative data analysis
techniques, they
all share at least three common characteristics. First, the
qualitative data
analysis process often begins during data collection. Thus, the analytic process is considered iterative or nonlinear (Creswell, 2014). A
researcher
may collect data and engage in early analysis only to realize
that more
data are needed to understand the participants’ experiences.
Even when
formal data analysis does not begin while data collection is
ongoing,
qualitative researchers often use memos to document emerging
ideas and
patterns, which form the basis for subsequent analysis. Initial
data analysis
that occurs during data collection can also allow researchers to
consider
whether they are obtaining the type and quality of information
they
intended. Second, a major goal of qualitative data analysis is
data reduction
(Creswell, 2014). Qualitative research can produce large
amounts of data
and the analytic process works to reduce the volume of
information by
identifying major patterns and themes within it. Researchers can
engage
this process on their own, in teams, and/or using computer-assisted qualitative data analysis software (CAQDAS) such as NVivo (see Bazeley & Jackson, 2013) or Atlas.ti. Third, the process is immersive,
meaning that it
requires a high level of engagement with the data. This can include reading and rereading interview transcripts multiple times to fully explore the data. During this process, researchers often write memos
that help to
document initial interpretations of the data as well as engage in
reflexivity
(e.g., processing how one’s background, biases, and
perspectives may
influence the analytic process) (Lincoln & Guba, 1985). These
memos can
be used as part of one’s audit trail, which is a record of research
steps that
helps to ensure data quality and transparency (Lincoln & Guba,
1985).
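Because memos double as an audit trail, even a minimal logging routine can make analytic steps reviewable later. The sketch below is our own; the file name and fields are hypothetical:

import csv
from datetime import datetime

AUDIT_FILE = "audit_trail.csv"  # hypothetical location

def write_memo(step, memo, reflexivity=""):
    """Append a timestamped analytic memo to the audit trail."""
    with open(AUDIT_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), step, memo, reflexivity]
        )

write_memo(
    step="initial coding, interview 3",
    memo="'Isolation' recurs alongside mentions of group work.",
    reflexivity="My own group-work experience may color this reading.",
)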
One popular analytical tool is the constant comparative method.
Al-
though grounded theorists developed this method, it is
commonly used
as a general tool for analyzing data and is useful for those
learning how
to engage in qualitative analysis because it provides a specific
three-phase
process. This process is known as coding, in which short words
or phrases
are used to “assign a summative, salient, essence-capturing,
and/or evoca-
tive attribute for a portion of language-based or visual data”
(Saldaña, 2013,
p. 3). Codes can reflect activities, relationships, roles,
processes, emotions,
perspectives, and other units of social organization. The
constant compar-
ative method begins with open coding words, lines, several
sentences, or
paragraphs of data. Open coding can be deductive and/or
inductive (Strauss
& Corbin, 1998). Deductive codes stem from borrowed concepts
such as
components of the theoretical framework or key themes from
relevant lit-
erature. Inductive or in vivo codes emerge from the data.
Induc-
tive coding can be developed from data that “strike as
interesting, poten-
tially relevant, or important to the study . . . for answering the
research ques-
tions” (Merriam, 2009, p. 178). Whether the open codes are deductive or inductive, it is important to identify the codes clearly with names and definitions (Miles & Huberman, 2005).
The next stage in the constant comparative method is axial
coding,
which is performed iteratively during the open coding process
and also
after open codes are developed. This stage begins the reduction
process and
includes comparing and connecting emerging codes into
categories (Strauss
& Corbin, 1998). Categories are “conceptual elements that
cover or span
many individual examples or codes previously identified”
(Merriam, 2009,
p. 181). For example, a researcher who has 100 open codes might reduce them to 20 categories. One can
do this by
grouping together data by related open codes to reassemble the
data and
demonstrate recurrent patterns and themes (Strauss & Corbin,
1998). The
axial coding process is also useful for separating data that are
essential to the
purpose of the study from data that fall outside the scope of the
research pur-
pose and question(s). The final phase of the constant
comparative approach
is selective coding; however, some researchers will only
perform open and
axial coding, particularly for exploratory studies. During the
selective cod-
ing process, the researcher pulls together themes to develop a
storyline and
identify a core category (Strauss & Corbin, 1998). The core
category “is
the central defining aspect of the phenomenon to which all other
categories
and hypotheses are related or interconnect” (Merriam, 2009, p.
200). For
example, a researcher might move from 20 categories to between one and five overarching themes, which reflect the primary narrative emerging across the
data that
provides a response to the research question(s).
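The three phases amount to progressive data reduction, which ordinary dictionaries can illustrate. The Python sketch below, with hypothetical codes of our own, models only the bookkeeping; the interpretive judgment in each phase is the researcher's:

# Open coding: short codes attached to excerpts (hypothetical data)
open_codes = {
    "avoided study groups": "isolation",
    "professor repeatedly mispronounced name": "microaggression",
    "joined cultural center": "support seeking",
    "sat alone in dining hall": "isolation",
}

# Axial coding: compare and group related codes into categories
categories = {
    "campus belonging": ["isolation", "support seeking"],
    "everyday racism": ["microaggression"],
}

# Selective coding: relate categories to one core category/storyline
core_category = "navigating belonging amid everyday racism"

print(len(open_codes), "excerpts ->",
      len(set(open_codes.values())), "codes ->",
      len(categories), "categories -> 1 core category")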
Sam considers the constant comparative approach, but instead
chooses
an analytic approach that stems from narrative inquiry. This
involves four phases: (a) an initial reading of transcripts to indicate general themes and consider how each part contributes to the whole story; (b) rereading the transcripts to view whether multiple narratives are present and to consider the structure, content, and larger contexts involved; (c) investigating the emerging patterns, including how the whole story and its parts are told; and (d) engaging the literature/theoretical framework with the participants' narrative(s) to glean a more in-depth understanding of the story (Josselson, 2011).
Research Quality
Although quantitative inquiry strives for reliability and validity,
in qualita-
tive research, trustworthiness is the predominant standard of
research qual-
ity (Guba & Lincoln, 1989; Lincoln & Guba, 1985).
Trustworthiness can be
established in multiple ways. One is by producing work that is
transferable,
or that provides enough context for readers to infer similar
results in their
own context (Krefting, 1999; Lincoln & Guba, 1985). This can
be done
by providing detailed documentation of data collection and
analysis proce-
dures as well as by using thick, rich description of participants’
experiences
(Krefting, 1999; Lincoln & Guba, 1985). One goal of qualitative research is credibility, or having data that accurately reflect the phenomenon (Krefting, 1999; Lincoln & Guba, 1985). Fostering credibility can
begin during
the data collection phase with prolonged engagement with
participants.
Another tool is member checking, which involves testing the
interpretations
of the data with study participants by sharing initial data
analysis for their
feedback (Krefting, 1999; Lincoln & Guba, 1985). Peer debriefing requires meeting with an individual who is unaffiliated with the research (a disinterested peer) and can give honest feedback (an equal power dynamic) about the plausibility of data interpretations. Additionally,
triangulation can be
built into the research design to produce divergent constructions
of reality
(Lincoln & Guba, 1985). For example, one can engage
methodological tri-
angulation through use of multiple forms of data collection
(interviews, par-
ticipant observation) or data triangulation through multiple data
sources.
Triangulation can also be performed through the involvement of multiple researchers (analyst triangulation) or during data analysis through the use of multiple theoretical frames (theory/perspective triangulation) (Patton,
2015). Triangulation can establish confirmability to ensure that
findings
are shaped more by study participants than by researcher biases.
Reflexive
processes such as journaling, engaging in dialogue with other
researchers,
and naming one’s positionality (e.g., relationship between
researcher and
participants/study topic) within the write-up of the study can
develop con-
firmability. Lastly, trustworthy studies should be dependable, or
demon-
strate consistent findings that could be repeated (Lincoln & Guba, 1985).
To establish dependability (and confirmability), researchers can
create an
audit trail that documents the steps and processes they engaged
in during
the qualitative investigation.
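Data triangulation can be tracked with a simple matrix of themes against data sources, making single-source findings visible. A hypothetical Python sketch; the themes and sources are illustrative only:

# Which data sources support each emerging theme (hypothetical)
support = {
    "isolation": {"interviews", "observations", "documents"},
    "microaggressions": {"interviews"},
    "support seeking": {"interviews", "observations"},
}

for theme, sources in support.items():
    note = "" if len(sources) > 1 else "  <- single source: corroborate"
    print(f"{theme}: {sorted(sources)}{note}")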
Sam selects multiple strategies to increase the trustworthiness
of the
study. One is member checking. Sam sends each of the
participants their
transcript with initial interpretations and questions. After giving
the partic-
ipant time to review the transcript and notes, Sam calls each
participant to
briefly ensure that the interpretations reflect the participants’
meaning and
to clarify any questions about the narratives. Another is using thick, rich description, including direct quotes from participants in the final write-up of the study. Lastly, Sam engages in peer debriefing with an
institutional
researcher in the office. This individual is not involved in the
study, but is
a Person of Color who graduated from a predominantly white
institution 3
years prior.
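Sam's member-checking sequence (send the transcript and interpretations, allow review time, follow up by phone) can be tracked per participant. A small sketch under our own assumptions; the fields are hypothetical:

from dataclasses import dataclass

@dataclass
class MemberCheck:
    """Status of member checking for one participant."""
    participant: str
    transcript_sent: bool = False
    follow_up_call_done: bool = False
    interpretation_confirmed: bool = False

checks = [
    MemberCheck("P01", transcript_sent=True,
                follow_up_call_done=True, interpretation_confirmed=True),
    MemberCheck("P02", transcript_sent=True),
]
pending = [c.participant for c in checks if not c.interpretation_confirmed]
print("Awaiting confirmation:", pending)  # -> ['P02']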
Conclusion
Qualitative research provides an important opportunity to
engage with par-
ticipants’ experiences through their own voices and behaviors.
Unlike quan-
titative methodologies, qualitative approaches view the
researcher as the
instrument through which data are collected (Patton, 2015). As
such, in-
tentional engagement throughout each step of the research
process is cru-
cial to ensure a well-aligned, accurate, and ethical design.
Successful use of
qualitative methodologies fosters opportunities for institutional
researchers
to pursue new questions and experiences within their work
(McLaughlin
et al., 2001). The rest of the volume continues to look at
specific con-
texts and considerations in which qualitative research can aid
institutional
research.
References
Bazeley, P., & Jackson, K. (2013). Qualitative data analysis
with NVIVO. Thousand Oaks,
CA: Sage.
Charmaz, K. (2014). Constructing grounded theory (2nd ed.).
Thousand Oaks, CA: Sage.
Chase, S. E. (2011). Narrative inquiry: Still a field in the
making. In N. K. Denzin &
Y. S. Lincoln (Eds.), The Sage handbook of qualitative research
(4th ed., pp. 421–434).
Thousand Oaks, CA: Sage.
Corbin, J., & Strauss, A. (2008). The basics of qualitative
research: Techniques and proce-
dures for developing grounded theory. Thousand Oaks, CA:
Sage.
Creswell, J. W. (2014). Research design: Qualitative,
quantitative, and mixed approaches
(4th ed.). Thousand Oaks, CA: Sage.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of
grounded theory: Strategies for
qualitative research. New York, NY: Aldine.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth-generation
evaluation. Newbury Park, CA:
Sage.
Guest, G., Namey, E. M., & Mitchell, M. L. (2013). Collecting
qualitative data: A field
manual for applied research. Thousand Oaks, CA: Sage.
Guido, F. M., Chávez, A. F., & Lincoln, Y. S. (2010).
Underlying paradigms in student
affairs research and practice. Journal of Student Affairs
Research and Practice, 47(1),
1–22. https://doi.org/10.2202/1949-6605.66017
Harper, S. R., & Kuh, G. D. (2007). Myths and misconceptions
about using quali-
tative methods in assessment. New Directions for Institutional
Research, 136, 5–14.
https://doi.org/10.1002/ir.227
Jackson, A., & Mazzei, L. (2013). Plugging one text into
another: Thinking with theory
in qualitative research. Qualitative Inquiry, 19(4), 261–271.
Josselson, R. (2011). Narrative research: Constructing,
deconstructing, and reconstruct-
ing story. In F. J. Wertz, K. Charmaz, L. M. McMullen, R.
Josselson, R. Anderson, & E.
McSpadden (Eds.), Five ways of doing qualitative analysis:
Phenomenological psychol-
ogy, grounded theory, discourse analysis, narrative research,
and intuitive inquiry (pp.
224–242). New York, NY: The Guilford Press.
Krefting, L. (1999). Rigor in qualitative research: The assessment of trustworthiness. In A. Milinki (Ed.), Cases in qualitative research: Research reports for discussion and evaluation (pp. 173–181). Los Angeles, CA: Pyrczak.
Lawrence-Lightfoot, S. (2016). Portraiture methodology:
Blending art and science.
Learning Landscapes, 9(2), 19–27.
LeCompte, M. D., Preissle, J., & Tesch, R. (1993). Ethnography
and qualitative design in
educational research (2nd ed.). San Diego, CA: Academic Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry.
Newbury Park, CA: Sage.
Magnusson, E., & Marecek, J. (2015). Doing interview-based
qualitative research: A
learner’s guide. Cambridge, UK: Cambridge University Press.
Marshall, C., & Rossman, G. B. (2006). Designing qualitative
research (4th ed.). Thou-
sand Oaks, CA: Sage.
McLaughlin, J. S., McLaughlin, G. W., & Muffo, J. A. (2001).
Using qualitative and
quantitative methods for complementary purposes: A case study.
New Directions for
Institutional Research, 112, 15–44. https://doi.org/10.1002/ir.26
Merriam, S. B. (1998). Qualitative research and case study
applications in education. San
Francisco, CA: Jossey-Bass.
NEW DIRECTIONS FOR INSTITUTIONAL RESEARCH •
DOI: 10.1002/ir
https://doi.org/10.2202/1949-6605.66017
https://doi.org/10.1002/ir.227
https://doi.org/10.1002/ir.26
A QUALITATIVE TOOLKIT FOR INSTITUTIONAL
RESEARCH 23
Merriam, S. B. (2009). Qualitative research: A guide to design
and implementation. San
Francisco, CA: Jossey-Bass.
Mertens, D. M. (2015). Research and evaluation in education
and psychology: Integrating
Diversity with quantitative, qualitative, and mixed methods (4th
ed.). Los Angeles, CA:
Sage.
Miles, M. B., & Huberman, A. M. (2005). Qualitative data
analysis. Thousand Oaks, CA:
Sage.
Morgan, D. L. (2007). Paradigms lost and pragmatism regained:
Methodological impli-
cations of combining qualitative and quantitative methods.
Journal of Mixed Methods
Research, 1(1), 48–76.
https://doi.org/10.1177/2345678906292462
Neuman, W. L. (2006). Social research methods: Qualitative
and quantitative approaches
(6th ed.). Boston, MA: Pearson.
Patton, M. Q. (2015). Qualitative research & evaluation
methods: Integrating theory and
practice (4th ed.). Thousand Oaks, CA: Sage.
Riessman, C. K. (1993). Narrative analysis. Newbury Park, CA:
Sage.
Rossman, G. B., & Rallis, S. F. (2017). An introduction to
qualitative research: Learning in
the field (4th ed.). Los Angeles, CA: Sage.
Saldaña, J. (2013). The coding manual for qualitative
researchers (2nd ed.). Thousand
Oaks, CA: Sage.
Spradley, J. P. (1979). The ethnographic interview. New York,
NY: Holt, Rinehart and
Winston.
Stage, F. K., & Manning, K. (Eds.). (2016). Research in the
college context: Approaches
and methods (2nd ed.). New York, NY: Routledge.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research:
Techniques and procedures
for developing grounded theory. Thousand Oaks, CA: Sage.
Van Note Chism, N., & Banta, T. W. (2007). Enhancing
institutional assessment efforts
through qualitative methods. New Directions for Institutional
Research, 136, 15–28.
https://doi.org/10.1002/ir.228
Yin, R. K. (2003). Case study research: Design and methods
(3rd ed.). Thousand Oaks,
CA: Sage.
CHRYSTAL A. GEORGE MWANGI is an assistant professor of higher education at the University of Massachusetts Amherst.

GENIA M. BETTENCOURT is a doctoral candidate in higher education at the University of Massachusetts Amherst.
Research in Higher Education, Vol. 36, No. 5, 1995

ASSUMPTIONS UNDERLYING QUANTITATIVE AND QUALITATIVE RESEARCH: Implications for Institutional Research

Russel S. Hathaway
4216D School of Education Building, The University of Michigan, Ann Arbor, MI 48109-1259
For institutional researchers, the choice to use a quantitative or qualitative approach to research is dictated by time, money, resources, and staff. Frequently, the choice to use one or the other approach is made at the method level. Choices made at this level generally have rigor, but ignore the underlying philosophical assumptions structuring beliefs about methodology, knowledge, and reality. When choosing a method, institutional researchers also choose what they believe to be knowledge, reality, and the correct method to measure both. The purpose of this paper is to clarify and explore the assumptions underlying quantitative and qualitative research. The reason for highlighting the assumptions is to increase the general level of understanding and appreciation of epistemological issues in institutional research. Articulation of these assumptions should foster greater awareness of the appropriateness of different kinds of knowledge for different purposes.
There are few subjects that generate as much passion among scientists as arguments over method. (Shulman, 1981, p. 5)
Institutional researchers are continually involved with implementing research agendas for various campus constituencies. Institutional research offices provide important technical and informational support for central decision makers in higher education by engaging in research-oriented activities such as tracking enrollment patterns, surveying incoming students, documenting faculty workloads, and assessing staff job satisfaction. Research methods that institutional researchers employ range from basic quantitative statistical analyses to interviews and case studies (Bohannon, 1988; Bunda, 1991; Fetterman, 1991; Hinkle, McLaughlin, and Austin, 1988; Jennings and Young, 1988; Sherman and Webb, 1988; Tierney, 1991). Some institutional researchers advocate integrating quantitative and qualitative approaches to institutional research (Marshall, Lincoln, and Austin, 1991; Peterson, 1985a). Often, the driving forces behind the choice of methods are time, money, resources, staff, and those requesting the study. The choice to use a quantitative approach (e.g., survey and statistical analysis of responses) versus a qualitative approach (e.g., transcription analysis of interviews) is generally decided at the level of methods. Although the choice of methods is often a difficult one, institutional researchers generally make the decision with relative ease, choosing the method that will garner the information they seek. However, they often make their decisions without giving much thought to the assumptions underlying research methods.
Over the past decade, educational researchers have been
engaged in an ongo-
ing polemic concerning quantitative and qualitative research.
They have been
arguing over philosophical commensurability, the concern that
qualitative re-
search has been seen as a methodological variation of
quantitative research, and
whether researchers should combine quantitative and qualitative
research meth-
ods when pursuing research interests (Donmoyer, 1985; Eisner,
1981, 1983;
Firestone, 1987; Howe, 1985, 1988; Shulman, 1981).
Although the intricate details of this debate are not of
paramount concern for
institutional researchers, the general discourse over the
fundamental philosophi-
cal grounds guiding research methods is relevant. Some of those
involved in the
debate argue that the choice to use a quantitative or qualitative
research ap-
proach should not be made at the method level (Guba and
Lincoln, 1981). This
concern has direct relevance for those making methodological
choices in an
applied field such as institutional research. The decision to use
quantitative or
qualitative methods is replete with assumptions concerning the
nature of knowl-
edge and reality, how one understands knowledge and reality,
and the process
of acquiring knowledge and knowledge about reality. When one
chooses a par-
ticular research approach, one makes certain assumptions
concerning knowl-
edge, reality, and the researcher's role. These assumptions shape
the research
endeavor, from the methodology employed to the type of
questions asked.
When institutional researchers make the choice between
quantitative and quali-
tative research methods, they tacitly assume a structure of
knowledge, an under-
standing and perception of reality, and a researcher's role. The
purpose of this
paper is to clarify and explore the underlying assumptions
contained within
quantitative and qualitative research. It is important for
institutional research-
ers to understand the philosophical grounding of the two
approaches so that
they may reflect on those assumptions while engaging in
institutional research.
In addition, understanding the philosophical grounding also
highlights the
strengths and weaknesses of both approaches. The reason for
contrasting the
two paradigms is to increase the general level of understanding
and apprecia-
tion of epistemological issues in the institutional research
profession. Articula-
tion of the epistemological differences should foster greater
awareness of the
appropriateness of different kinds of knowledge for different
purposes; it may
thereby help legitimate the adoption of alternative and more
appropriate knowl-
edge-yielding paradigms in institutional research. It should also
help reduce
conflicts within the field by justifying and providing a basis for
tolerance of
diversity and multiplicity in research design.
Greater epistemological appreciation seems to be an essential
prerequisite to
developing an appropriate inquiry approach whereby researchers
would explic-
itly select a mode of inquiry to fit the nature of the problematic
situation under
study, the state of knowledge, and their own skills, style, and
purpose (Don-
moyer, 1985; Smith, 1983a). In addition, appreciation of
epistemological issues
has implications for the evaluation of institutional research
products. It leads to
a belief that the quality of a piece of research is more critically
indicated by the
appropriateness of the paradigm selected than by the mere
technical correctness
of the methods used (Donmoyer, 1985; Eisner, 1981; Herriott
and Firestone,
1983; Smith, 1983a).
The debate that has been going on among educational
researchers will be
highlighted in brief, emphasizing the major points that have
been raised by
proponents of quantitative and qualitative research, as well as
the arguments for
those who advocate the combination of the two approaches. This
debate will be
used as a stepping stone into a discussion of the underlying
philosophies of
quantitative and qualitative research. An example program
review will be used
to describe how the two approaches might structure the program
investigation
and evaluation. This example is not meant to represent an ideal,
or even typical,
method of conducting a review, but rather to provide a vivid
sense of the dis-
tinctions between the two approaches. Finally, differences
between the para-
digms will be identified and discussed followed by a conclusion
highlighting
implications for conducting inquiry in institutional research.
DEBATES OVER DISCIPLINED INQUIRY
Early Debate
The educational research community is engaged in a heated
debate over
quantitative and qualitative approaches to disciplined inquiry.
The crux of the
debate centers on the incommensurability of the underlying
assumptions struc-
turing the approaches (Donmoyer, 1985; Eisner, 1981;
Firestone, 1987; Howe,
1985). This debate in educational research, however, followed
a crisis in the
social sciences concerning identical philosophical issues
(Bernstein, 1976,
1983; Rabinow and Sullivan, 1987). Bernstein (1976) has
provided one of the
most comprehensive summaries of the history of the social
science debates, as
well as a rich description of the various research paradigms that
were, and still
are, being discussed.
The debate over quantitative and qualitative research arose out
of the social
and political unrest of the 1960s during which the foundations
of the social
disciplines came under radical criticism (Bernstein, 1976;
Rabinow and Sul-
livan, 1987). Bernstein argues that these critiques came at a
time when the
social disciplines had arrived at a tentative agreement on an
empirical founda-
tion where they could begin a stable expansion of the scientific
knowledge of
society. Critics argued, and continue to argue, that the
foundations of the social
sciences were replete with weakness; that what was believed to
be objective
scientific knowledge was a veiled form of ideology that
supported the status
quo (Bernstein, 1976; Gordon, Miller, and Rollock, 1990;
Stanfield, 1985).
Others argued that the social sciences did not provide the
critical perspectives
on what was happening in society, nor did they provide
solutions to the prob-
lems they set out to solve (Bernstein, 1976). The belief that a
rational, system-
atic study of societal problems would result in policies and
programs that would
address them was doubted (Bernstein, 1976).
As the social sciences began to experience profound criticism
and self-doubt,
newly discussed approaches arose to rescue social science
research from the
depths of angst. Bernstein (1976) argues that linguistic
philosophical inquiries
were used to challenge the epistemological foundations of the
social sciences.
Phenomenology and hermeneutics also became more welcome in
social scien-
tific circles. These disciplines, often characterized as soft and
subjective by
empirical researchers, were perceived as panaceas for the ills
facing social re-
search (Bernstein, 1976). Advocates of phenomenology and
hermeneutics be-
lieved that these approaches could provide elucidative insight
into social pro-
cesses that was not being acquired with empirical inquiry
methods (Bernstein,
1976).
The literature produced in this period concerning the nature of
research can
best be described as muddled. Bernstein (1976) reports that
there was no agree-
ment during the 1960s and 1970s about what were provable
results, what were
the proper research methods, what were important problems to
address, or what
were "the most promising theoretical approaches" in the study
of social science.
It was during this confusing period that the educational
community began ques-
tioning its approaches to disciplined inquiry.
Educational Debate
Closely following what could be called the "angst" period in
social science
research, the educational research community began to
experience a similar
debate, beginning in the late 1970s and continuing today
(Donmoyer, 1985;
Eisner, 1981, 1983; Firestone, 1987; Garrison, 1986; Giarelli
and Chambliss,
1988; Howe, 1985, 1988; Hutchinson, 1988; Lincoln and Guba,
1985; Mar-
shall, Lincoln, and Austin, 1991; Sherman and Webb, 1988).
Throughout this period, educational researchers have engaged in a heated debate over the degree to which quantitative and qualitative methods can be combined. This discourse
revolved around defining different facets of qualitative and
quantitative re-
search along with the debate focusing on the pros and cons of
combining the
two approaches. In general, researchers fall into three
perspectives in this dis-
cussion: the purists, the situationalists, and the pragmatists
(Rossman and
Wilson, 1985). The purists would not entertain the notion of
discussing combin-
ing the two approaches. Educational researchers within this
perspective focus
on the incommensurability between the two approaches and
argue that the phi-
losophies grounding the two approaches are so divergent in
terms of assump-
tions about the world, truth, and reality that one should not even
consider com-
bining quantitative and qualitative research (Guba, 1987; Smith,
1983a, 1983b;
Smith and Heshusius, 1986). The concern is that by combining
approaches,
researchers neglect to acknowledge that the different
approaches make vastly
different assumptions concerning knowledge and reality. Others
have discussed
the problems involved in ignoring the issue of underlying
assumptions and fo-
cusing only on the benefits of combining both approaches
(Donmoyer, 1985).
In contrast, those falling within the situationalist perspective
focus on the
level of method and argue "that certain methods are most
appropriate for spe-
cific situations" (Rossman and Wilson, 1985, p. 630). The
choice of method for
the situationalist is partially determined by the questions to be
answered. Fur-
thermore, situationalist researchers also alternate between
qualitative and quan-
titative methods as they engage the research process (Rossman
and Wilson,
1985). In other words, researchers adhering to this perspective
may use a sur-
vey to generate information that could assist in the development
of an interview
protocol.
For the pragmatist, quantitative and qualitative methods are
viewed as capa-
ble of informing one another throughout the research process. In
contrast to the
situationalist, who alternates between the two approaches, the
pragmatist views the
two approaches as capable of simultaneously bringing to bear both of their
strengths to answer a research question. Using interviews,
surveys, question-
naires, and observation techniques within one study is an example of a prag-
matic approach to integrating or combining research methods.
Institutional Research Debate
Throughout the past decade of discussion, the educational
research debate
has highlighted many of the strengths and weaknesses of
quantitative and quali-
tative research, and it has brought to light the philosophies
underlying the two
approaches. For institutional researchers, the debate follows
closely on the heels
of the evolution of the profession, a profession that has slowly
moved from
engaging primarily in descriptive quantitative studies in the
1970s and 1980s
(Peterson, 1985a, 1985b) to more multimethod studies in the
1990s (Peterson
and Spencer, 1993). Institutional researchers have engaged in
debates similar to
those of educational researchers, but not to the same extent.
Institutional re-
searchers have primarily discussed quantitative and qualitative
differences at
the method level (Bohannon, 1988; Fetterman, 1991; Fincher,
1985; Hinkle,
McLaughlin, and Austin, 1988; Jennings and Young, 1988;
Marshall, Lincoln,
and Austin, 1991; Tierney, 1991) and how different
methodologies yield differ-
ent information (Peterson and Spencer, 1993). At this level,
institutional re-
searchers have been attending, correctly so, to assumptions
supporting specific
statistical procedures (Bohannon, 1988; Yancey, 1988a, 1988b),
such as having
random selection when performing multiple regression, having a
sample size
larger than five in each cell of an ANOVA, or assuming a
normal distribution.
By attending to the assumptions at this level, institutional
researchers have pro-
duced studies that have rigorous application of methods.
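To make these method-level checks concrete, here is a minimal sketch of how such assumptions might be verified in practice. It is an illustration only, not part of the original discussion: it assumes NumPy and SciPy are available, and the survey_scores array and the cell counts are hypothetical placeholders.

# Minimal sketch of method-level assumption checks such as those named above.
# Assumes NumPy/SciPy; survey_scores and the cell counts are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
survey_scores = rng.normal(loc=3.5, scale=0.8, size=120)  # placeholder responses

# Normality assumption: Shapiro-Wilk test (a large p gives no evidence against it).
w_stat, p_value = stats.shapiro(survey_scores)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")

# ANOVA cell-size check: every cell should contain more than five cases.
cells = {"freshman": 34, "sophomore": 31, "junior": 29, "senior": 26}
assert all(n > 5 for n in cells.values()), "some cell has five or fewer cases"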
Institutional researchers advocating qualitative approaches have
also appro-
priately focused attention on the assumptions guiding good
qualitative research.
Qualitative institutional researchers have tended to address data
collection pro-
cedures, such as having an interview protocol composed of
nonleading ques-
tions and accurate transcripts of observation or taped accounts
(e.g., Miles and
Huberman, 1984), but they fail to discuss or acknowledge the
scientific philoso-
phies (i.e., phenomenology, hermeneutics, positivism) in which
they are
grounded. Those using quantitative approaches also neglect to
mention the
philosophical grounds on which their approaches are based. It is
important for
institutional researchers to be cognizant of the philosophical
assumptions guid-
ing both quantitative and qualitative approaches. It is important
not only be-
cause it is good practice, but because institutional research is an
applied field in
which much of what is done is used for policy decisions.
These policy deci-
sions, once implemented, make assumptions concerning the
reality of campus
life. These realities are defined, in part, by the underlying
philosophies structur-
ing the approach used by the institutional researcher. For
example, a statistical
analysis performed on survey results may describe certain
aspects of the
campus, but these aspects have been shaped by the people who
developed the
survey and may not reflect the reality as understood and
experienced by those
who answered the survey.
PARADIGMS UNDERLYING QUANTITATIVE AND
QUALITATIVE
RESEARCH
Defining Paradigm
A framework is needed to discuss the differences between
philosophies un-
derlying the quantitative and qualitative research approaches.
The distinction
between the philosophies can be made by using the concept of
paradigm. The
work of a number of philosophers (Bernstein, 1976; Firestone,
1987; Gubrium,
1988; Kuhn, 1962, 1970, 1974) is quite useful in defining the
idea of a para-
digm. Kuhn defines scientific paradigm as a theoretical
framework, or a way of
perceiving and understanding the world, that a group of
scientists has adopted
as their worldview. For Bernstein, the underlying paradigm
dictates a level of
generally unexamined common assumptions, attitudes, and
expectations, and a
framework within which inquiry operates. This paradigm guides
a shared sense
of what scientific inquiry is and could be, the kind of reality
being investigated,
the proper form of inquiry, and a general orientation for
perceiving and inter-
preting the world (Bernstein, 1976). The paradigm from which
one operates has
consequences for views of the nature of empirical knowledge,
the relations of
theory and practice, the relations of fact and perception, and the
education and
role of the theorist (Bernstein, 1976). When a scientist observes
a phenomenon
and interprets what this observation means, that scientist is
using a particular
paradigm to give that observation meaning. Gubrium (1988)
defines paradigm
as a way of structuring everyday experience, a way of framing
events, a sense
of what is real and how to prove it, and an implicit stance on
ontology and
epistemology (i.e., being and knowing). This paradigm also
influences the
methods chosen (Firestone, 1987) and implies policy and action
(Banks, 1988).
In essence, scientific paradigms act as lenses through which
scientists or
researchers are able to perceive and understand the problems in
their field and
the scientific answers to those problems. Paradigms dictate what
researchers
consider data, what their role in the investigation will be, what
they consider
knowledge, how they view reality, and how they can access that
reality. A
scientific paradigm provides a group of scientists or researchers
with a way of
collectively making sense of their scientific world. It is this
framework of ev-
eryday assumptions about knowledge, reality, and the proper
methodology that
will be summarized in the remainder of this paper.
Distinguishing the Paradigms
Quantitative and qualitative approaches can both serve research
purposes, but
in different ways and with different effects. The ways in which
they are used
and the insights provided are a function of the underlying
assumptions of the
paradigms grounding the approaches. The attempt here is to
systematically ar-
ticulate the paradigms underlying both these approaches by
direct description
and by contrast with a recognizable alternative. The two
paradigms will be
contrasted on a number of dimensions (summarized in Table 1).
Before discussing the differences between the two paradigms of
inquiry, it
may be useful to comment on the paradigms underlying the
distinction between
quantitative and qualitative research approaches. A review of
the literature
yields a wide array of distinctions between the philosophies
undergirding the
two approaches.
TABLE 1. The Paradigm Framework Underlying the Two Approaches

Methodology
Empirical-Analytic (quantitative): Begin with hypothesis of a relationship between cause & effect (program & outcome). Test the hypothesis. Develop instruments. Identify sample (random). Measure/code phenomenon. Aggregate data. Generalize: theory that has withstood this test. Researcher should have objective stance.
Interpretive (qualitative): Formulate a question. Identify sample (purposive). "Fix" phenomenon (interview, observe, tape record). Narrative articulation and interpretation of themes. Compare data types; integrate material (as parts of a whole). Hypothesis generating. Write case descriptions. Generalize? Researcher should have participatory stance.

Ontology (Reality)
Empirical-Analytic (quantitative): Public events (e.g., discussion, utterances, etc.) reflect a reality. Private perceptions (e.g., beliefs, perceptions, etc.). Subjects and objects exist separate from the perception of them. Objective events, subjective states. Governed by laws.
Interpretive (qualitative): Public events. People have different aims, attitudes. People interact within, and can change, a local setting. Subjects and objects located in intersubjective communities. People construct and act within a context which structures and constrains that activity.

Epistemology (Knowledge)
Empirical-Analytic (quantitative): Knowledge = objective reports of measured dimensions of the phenomenon. Compared against (tacit) norms ("skills"). Compared over time. Differences/no differences attributed to hypothesized causal relationship, to lack of validity of instruments, or to alternative causes. General statements of regularities among objective properties that are internally consistent and that correspond to the way things really are.
Interpretive (qualitative): Knowledge = understanding of participants' aims, perspectives, assumptions: the terms in which their everyday life is grasped. Plus articulation of the local social context of interaction. Description of specific cases (persons & communities): people employ interpretive schemes that must be understood before action can be described. Character of the local context must be articulated.

Included among these are Geertz's (1973) distinction between thin and thick
description; Hall's (1976) low context and high context; Pike's
(1967) etic and
emic; Kaplan's (1964) logic-in-use and reconstructed logic;
Smith's (1983a)
realist and idealist continuum; Smith and Heshusius's (1986)
rationalist and
naturalist distinction; Habermas's (1988) and Bemstein's (1976)
empirical-ana-
lytic and interpretive distinction; and the distinctions between
acquaintance
with and knowledge about as variously construed by James
(1918), Dewey
(1933), Schutz (1967), and Merton (1972). For the purposes of
this paper, the
term empirical-analytic will be used to describe the paradigm
structuring quan-
titative research, and the term interpretive will be used to
describe the paradigm
underlying qualitative research.
Most commonly, the empirical-analytic paradigm has been
associated with
positivism. There are many varieties of positivism (Phillips,
1983). Comtean
positivism holds that the scientific method can be applied to
human experiences
(Phillips, 1983). Thus, researchers within the Comtean
tradition focus upon
observable, objectively determinable phenomena. In contrast,
logical positivism
is marked by an intense dislike of metaphysics and aims to
remove the idea
from both the natural and the human sciences (Phillips, 1983).
Logical positi-
vists also believe in the verifiability principle of meaning,
which states that
something is meaningful if and only if it is verifiable
empirically--directly by
observation through the senses. From the verifiability concept
arose the idea of
operational definitions (Phillips, 1983). Other terms used to
describe this para-
digm include hypothetico-deductive and objectivist (Moss,
1990).
The description of the interpretive inquiry paradigm has a wider
range of
descriptors. Interpretive inquiry can be described as
phenomenological, her-
meneutical, experiential, and dialectic. Naturalistic, inductive,
and relativist are
some of the other terms used to describe the interpretive
paradigm (Moss,
1990). Each of these terms is difficult to describe in brief or
with precision. It
needs to be made clear that although many of these terms can be
identified as
interpretive research, caution is required against equating it with
any one of them.
This is due to the slight variation in assumptions concerning
different interpre-
tive approaches. As will be seen later, all the interpretive
research traditions,
however, generally share common assumptions about
methodology, ontology,
and epistemology.
Methodological and Ontological Differences
The description of the paradigms begins by comparing the
researcher's role
and relationship to the setting under the two paradigms, and by
identifying the
epistemological and validity assumptions underlying the choice
of role and rela-
tionship. Knowledge and understanding of a college or
university situation can
be acquired in two ways: (1) by studying, "objectively," data
generated by the
situation, and (2) by becoming part of the situation by
understanding participant
views of it. We can come to "know" the chemistry and
psychology departments
by examining faculty research productivity, enrollment
statistics, questionnaire
results, or GRE subject tests; or, alternatively, by functioning
within these de-
partments for a period of time talking with faculty, students,
and staff.
Empirical-analytic inquiry is characterized by the researcher's
detachment
from the organizational setting under study (Eisner, 1981;
Phillips, 1983; Smith,
1983a, 1983b). The detachment derives, in part, from the
assumption that the
object under study is separate from, unrelated to, independent
of, and un-
affected by the researcher (Eisner, 1981; Smith, 1983a, 1983b).
The mind is
separate from reality (Smith, 1983a, 1983b) and truth is defined
as a corre-
spondence between our words and that independently existing
reality (Smith
and Heshusius, 1986). In other words, "there are social facts
with an objective
reality apart from the beliefs of individuals" (Firestone, 1987,
p. 16). Physics
provides an ideal example. The objects of interest are measured
with instru-
ments, the data are analyzed to determine if logical patterns
seem to exist, and
rational theories are constructed to integrate, explain, and
perhaps predict a
multitude of facts. Underlying the detachment of the
researcher inquiring from
an empirical-analytic perspective are critical ontological
assumptions: the re-
searcher is guided by belief in an external reality constituted of
facts that are
structured in a law-like manner (Firestone, 1987). In essence,
researchers con-
ducting inquiries within this paradigm are hoping to document
laws that struc-
ture reality.
In contrast, inquiry for the interpretive paradigm carries with it
the assump-
tions that the researcher can best come to know the reality of a
situation by
being there: by becoming immersed in the stream of events and
activities, by
becoming part of the phenomenon of study, and by documenting the understanding of the situation by those engaged in it (Firestone,
1987; Herriot and
Firestone, 1983; Howe, 1985; Jacob, 1988; Smith, 1984). Jacob
(1988) states,
"Qualitative research has been characterized as emphasizing the
importance of conducting research in a natural setting, as assuming the importance of under-
standing participants' perspectives, and as assuming that it is
important for
researchers subjectively and empathetically to know the
perspectives of the par-
ticipants" (p. 16). Knowledge is validated experientially
(Firestone, 1987; Her-
riot and Firestone, 1983; Howe, 1985; Jacob, 1988; Smith,
1984). Underlying
the interpretive paradigm is a very different set of
epistemological assumptions
from those of the empirical-analytic paradigm. Fundamental to
the interpretive
paradigm is the belief that knowledge comes from human
experience, which is
inherently continuous and nonlogical, and which may be
symbolically repre-
sentable (Firestone, 1987; Herriot and Firestone, 1983; Howe,
1985; Jacob,
1988; Smith, 1984). Reality is constructed by those
participating in it, and un-
derstanding the reality experienced by the participants guides
the interpretive
researcher. Truth is "a matter o f socially and historically
conditioned agree-
ment" (Smith and Heshusius, 1986, p. 6). An interpretive
researcher would not
look for the laws governing reality because, ultimately, reality
is constructed
and understood differently for each individual (Taylor, 1987).
For example, an
interpretive researcher could not describe the "objective"
culture of an aca-
demic department, but could describe the culture as seen by
those participating
in it. Some would argue that one can never understand the
reality of others, but
only be able to articulate one interpretation of it (Cziko, 1989;
Dilthey, 1990;
Kent, 1991).
The researcher's role in empirical-analytic inquiry can be best
described as
that of onlooker. Since researchers operating from an empirical-
analytic para-
digm adhere to the concept of a mind-reality duality,
researchers simply need to
look around in the environment to document objective reality.
Quantitative re-
searchers are detached to avoid personal bias infringing on the
description of
reality (Firestone, 1987). Empirical-analytic research
presupposes an indepen-
dent reality and then investigates how we are a part of that
reality and how we
can know that reality (Firestone, 1987; Smith 1983a, 1983b;
Smith and
Heshusius, 1986). Subsequently, the researcher is a detached
observer, looking
at reality and attempting to understand its complexities and
relationship to those
doing the observation. The researcher may use a telescope,
microscope, survey,
or assessment instrument when viewing a selected piece of the
world; such use
allows the researcher to remain detached, an essential feature of
empirical-
analytic inquiry. What the researcher sees (i.e., data, coded
interviews) are
taken prima facie as indicators of "reality."
For interpretive inquiry, the researcher becomes an actor in real
situations.
The researcher must attend to the total situation and integrate
information from
all directions simultaneously--interviews, observations, and
collected cultural
artifacts (Denzin, 1971; Herriot and Firestone, 1983; Howe,
1988; Smith, 1984;
Taylor, 1987). The relevant world is the field surrounding the
individual actor/
researcher (Denzin, 1971; Herriot and Firestone, 1983; Howe,
1988; Smith,
1984). Researchers engage what is being researched to
understand what is tak-
ing place. They identify what they know about what they are
studying to eluci-
date the understanding they are bringing to the situation (see
e.g., McCracken,
1988; Rabinow and Sullivan, 1987). "It is by drawing on their
understanding of
how they themselves see and experience the world that they can
supplement
and interpret the data they generate" (McCracken, 1988, p. 12).
The re-
searcher's knowledge can be used as a guide, directing the
researcher to possi-
bilities and insights into that which is being researched
(McCracken, 1988). For
this reason, universal law and generalizability are limited
because reality is a
constructed concept and a researcher's interpretation is also a
constructed part
of the reality observed. Reality for those being studied is
different for everyone
in the researcher's field of vision.
Another difference between the two paradigms is the source of
the analytical
categories around which data are organized. In a typical piece
of empirical-
analytic research, the investigator uses a theoretical framework
from which to
preselect a set of categories that will guide the inquiry
(Firestone, 1987; Howe,
1985; Smith, 1983a, 1983b; Smith and Heshusius, 1986). The
goal is to isolate
and define categories precisely before the study is undertaken,
and then to de-
termine the relationships between them (Firestone, 1987; Howe,
1985; Mc-
Cracken, 1988; Smith, 1983a, 1983b; Smith and Heshusius,
1986). Hypotheses
are phrased in terms of these categories, and only those data
pertaining to them
are collected (Howe, 1985; McCracken, 1988). The life of a
college or univer-
sity microenvironment (i.e., academic department, student
affairs office) is
viewed through the lens of a limited number of categories. For
example, when
investigating the supervisory style of student affairs middle
managers, an insti-
tutional researcher could apply categories of human
development to see if these
managers engage in situational supervision. At the extreme,
some might argue
that the reality being viewed is being actively structured by the
categories em-
ployed by the researchers to investigate the phenomenon of
interest.
Empirical-analytic researchers may derive their a priori
categories from per-
sonal beliefs or experience, from theoretical formulation, or
from their own or
others' interpretive research (Heyl, 1975; McCracken, 1988). In
the case of
interpretive inquiry, there are, generally, no intentionally
prescribed categories
to constrain the researcher (Denzin, 1971; Eisner, 1981; Howe,
1988; Shulman,
1981; Smith, 1983a, 1983b). Instead, the interpretive researcher
attempts to
identify emergent themes within an understanding of the
respondent's view-
point of the context (Denzin, 1971; Eisner, 1981; Shulman,
1981; Smith, 1983).
Features are noticed and identified through an interpretive,
iterative process
whereby data and categories emerge simultaneously with
successive experience
(McCracken, 1988). The process represents an experiential
exploration and is
particularly suited to early inquiry into new research territory
(Denzin, 1971;
Firestone, 1987; Smith and Heshusius, 1986). Interpretive
inquiry is useful for
generating tentative categories grounded in the concrete
circumstance of a par-
ticular situation. Such emergent categories may subsequently be
used as the a
priori categories guiding the more deductive, hypothesis-testing
empirical-ana-
lytic approach.
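To make the contrast concrete, the following sketch codes transcript lines against a fixed set of a priori categories and sets aside uncoded lines as candidates for emergent themes. It is a minimal illustration under assumed data: the categories, cue phrases, and transcript lines are hypothetical, and keyword matching is only a crude stand-in for the interpretive, iterative coding described above.

# Minimal sketch contrasting a priori coding with emergent-theme residue.
# The categories, cue phrases, and transcript lines are hypothetical placeholders.
a_priori = {
    "substantive": ["evidence", "interpretation", "the text says"],
    "management": ["turn in", "due date", "syllabus"],
}

transcript = [
    "What evidence supports that interpretation?",
    "Remember the due date for the essay.",
    "I felt nervous disagreeing with the professor.",  # uncoded: emergent candidate
]

uncoded = []
for line in transcript:
    codes = [cat for cat, cues in a_priori.items()
             if any(cue in line.lower() for cue in cues)]
    print(codes or "(no a priori code)", "->", line)
    if not codes:
        uncoded.append(line)  # material an interpretive reader would revisit for themes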
A caveat must be noted to the process just described. Some may
argue that
the idea of viewing a situation or phenomenon for "emergent"
themes is unat-
tainable. Phenomenologists and hermeneuticists might disagree
with the de-
scription just provided. For them, a situation or occurrence
cannot be compre-
hended without one's own knowledge about the situation.
Everyone has some
idea of the phenomenon at which they are looking and these
ideas shape what is
being seen. In other words, the "emergent" themes that are
being observed may
be seen because of the particular knowledge or ideas possessed
by the re-
searcher before the start of the research. Phenomenologists
believe that the
researcher's preknowledge can be identified and bracketed out
when viewing a
phenomenon (McCracken, 1988). In contrast, a hermeneuticist
would disagree
and argue that one can never remove one's own preknowledge
from the investi-
gation (Kvale, 1983; Packer and Addison, 1989). One cannot
understand the
situation without preknowledge, because preknowledge assists
in understanding
what is being seen.
A further difference is the aim of inquiry. The aim of inquiry
for the empiri-
cal-analytic paradigm is to generalize from the particular to
construct a set of
theoretical statements that are universally applicable
(Donmoyer, 1985; Fire-
stone, 1987; Garrison, 1986; Howe, 1988; McCracken, 1988;
Smith, 1983a,
1983b; Smith and Heshusius, 1986). The institutional research
done in the em-
pirical-analytic paradigm aims to develop understanding of
classes of higher
education phenomena, rather than to focus on particular
instances in particular
settings. Interpretive inquiry, however, is directed toward the
unique situation or
what Lewin (1951) calls a focus on the whole and the
individual's present
situation. The aim of interpretive inquiry is to describe in detail
a specific
situation or phenomenon under study. The situationally relevant
products of
qualitative inquiry serve both practical and theoretical purposes
(Jacob, 1988;
McCracken, 1988). They can provide guides for action in the
immediate situa-
tion and ideas for developing hypotheses to guide quantitative
inquiry (Miles
and Huberman, 1984).
Epistemological Differences
The different paradigms are also associated with different types
of knowl-
edge. The aim of situation relevancy pursued in interpretive
research is served
by knowledge of the particular phenomenon (i.e., college or
university, aca-
demic department, etc.) under study (McCracken, 1988;
Mishler, 1986). The
aim of generalizability sought by empirical-analytic research is
served by the
development of universal knowledge. Interpretive inquiry
focuses on the partic-
ular: the knowledge that is infused with human organization and
human inter-
est, as represented by the situation under study (Bernstein,
1976, 1983; Mc-
Cracken, 1988; Mishler, 1986). For the interpretive paradigm,
knowledge is
knowledge only as understood within the social context in
which it takes place
(Guba, 1987; Guba and Lincoln, 1981; McCracken, 1988;
Mishler, 1986;
Smith, 1983a, 1983b; Smith and Heshusius, 1986). The meaning
of a particular
utterance or interaction can be understood and has meaning only
within the
specific context in which it occurred (McCracken, 1988;
Mishler, 1986). In the
extreme, generalizability within the empirical-analytic inquiry
implies a disso-
ciation of universal knowledge from human interest (Habermas,
1971). And, at
the other extreme, qualitative inquiry implies a preoccupation
with the idio-
syncratic."
Knowledge for both paradigms is further differentiated by what
researchers
consider to be data and the level at which they consider issues
of meaning. In
interpretive inquiry, the aim of understanding a particular
situation requires that
researchers make direct experiential contact with the
phenomena under study
(e.g., classroom, academic department, etc.). Understanding the
events, activ-
ities, and utterances in a specific situation requires a complex
appreciation of
the overall context in which the phenomenon occurs
(McCracken, 1988; Mish-
ler, 1986). Context refers to the complete fabric of local
culture, people, re-
sources, purposes, earlier events, and future expectations that
constitute time-
and-space background of the immediate and particular situation
(Denzin, 1971;
Guba, 1987; Guba and Lincoln, 1981; McCracken, 1988;
Mishler, 1986; Smith,
1984). Facts have no meaning in isolation from the setting
(Herriott and Fire-
stone, 1983; McCracken, 1988; Mishler, 1986). Meaning is
developed from the
point of view of the participant (Firestone, 1987; McCracken,
1988; Mishler,
1986; Smith, 1983a, 1983b). Interpretive research yields
knowledge that is con-
nected to the participant's definition or perspective of the
situation, what
Rogers (1951) has termed the "phenomenal field" of the person.
Researchers
involve themselves directly in the setting under study in order
to appreciate
organizational phenomena in light of the context in which they
occur and from
the participants' point of view.
In empirical-analytic inquiry, the aim of developing universal
principles of
institutional life necessitates stripping away the idiosyncrasies
of the particular
phenomenon studied to reveal what is generally applicable to all
similar situa-
tions (Firestone, 1987; Garrison, 1986; Howe, 1985; Smith,
1983a, 1983b;
Smith and Heshusius, 1986; Soltis, 1984). The separation of the
universal from
the particular is accomplished through several processes. With
the aid of sam-
pling, aggregation, and other analytic techniques, the
uniqueness of individual
academic departments or classrooms is randomized, controlled
for, and other-
wise "averaged," revealing the core of presumed common truths.
The validity
of such efforts relies on the comparability of measurements
across observations,
settings, and times, as well as the completeness with which the
observational
procedures and situations are documented. Hence, the concern
with instrumen-
tation, specification, precision, and adherence to
methodological assumptions
(i.e., sampling is random, variables are normally distributed).
Empirical-analytic research is designed to be detached from,
and independent
of, a specific situation under study in a particular organization,
academic de-
partment, or classroom. The researcher determines the
frequencies of, and asso-
ciations among, events with respect to a set of hypothesized
categories and
relationships (Firestone, 1987; Garrison, 1986; Howe, 1985;
Smith, 1983a,
1983b; Smith and Heshusius, 1986; Soltis, 1984). Meaning is
assigned to
events on the basis of a priori analytic categories and explicit
researcher-free
procedures. The spectrum of a phenomenon is filtered through
the researcher's
preset categories; elements related to the categories are
selected, coded as data,
and simultaneously given meaning by the categories (Firestone,
1987; Garrison,
1986; Howe, 1985; Smith, 1983a, 1983b; Smith and Heshusius,
1986; Soltis,
1984). As a result, data are considered factual when they have
the same mean-
ing across situations and settings. That is, they are context-free.
AN ACADEMIC PROGRAM REVIEW EXAMPLE
To illustrate how the underlying philosophical grounds of the
two paradigms
shape an approach to an institutional research question, a
hypothetical example
of an academic program review is presented. The example
developed focuses
on an English department's interest in whether it has
successfully implemented
its new focus on critical thinking skills and what impact the
focus has on var-
ious outcomes of interest, including faculty workload. The
assumption behind
the design is the belief that interactive and collaborative class
discussion will
facilitate critical thinking skills more so than the normal faculty
lecture format.
By engaging with their classmates over course-assigned texts,
the department
hopes that the students will reflect on their own perspectives
and interpreta-
tions, but also be challenged to better articulate what they
believe the texts to
be saying. In addition, the department hopes that the increase in
skills of articu-
lation will translate into better writing and better academic
performance in other
writing-based classes as well as better job placement upon
graduation.
Table 2 highlights some of the major differences between an
empirical-ana-
lytic and an interpretive approach to this study. Comparing the
aims of the
study, one notices that empirical-analytic institutional
researchers would look to
document the implementation of classroom discussions. They
would hypothes-
ize prior to the study what they think would occur, for example,
what "types"
of interactions they would see. In this case, they would
hypothesize that discus-
sions would facilitate the writing skills of those students
participating in the
study and compare them to a group of students in a control
group who were
exposed to the traditional lecture format. They would want to
describe changes
that occur, whether they be the presence or absence of what
they expected to
occur. In contrast, interpretive researchers' intention would be
to explain the
content and the processes of the discussions occurring in the
classroom with the
discussion intervention. They want to document the
understanding of the inter-
vention from the participants' viewpoints and explicate any
unpredicted and
emergent themes.
The aims of the study are structured by the underlying
assumptions guiding
the paradigms. On the interpretive side, the assumption that
reality is con-
structed directs the researchers to attempt to document how the
participants
understand and experience the critical thinking focus of the
department and
how faculty view the impact on their workload. In contrast, the
empirical-ana-
TABLE 2. Hypothetical Academic Program Review

Stated aims of the study
Empirical-Analytic (quantitative): To document implementation of critical thinking component. To chronicle (presence or absence of predicted) changes. Summative evaluation (decision making and accountability).
Interpretive (qualitative): Explanation of participant and processes of discussion. Document understanding of goals from participant perspectives. To articulate unpredicted, emergent themes. Formative evaluation (program improvement).

Design details
Empirical-Analytic (quantitative): Observation of classes for 3 months. Interviews with 6 faculty. Surveys of faculty, students, and graduates. 3 focus group discussions.
Interpretive (qualitative): Observation, interviews, field notes in 3 classes (selected to contrast) for 12 months. Interviews with 6 faculty, 6 students, 6 graduates. 3 focus group discussions.

Material obtained
Empirical-Analytic (quantitative): Observation notes of conversation "gist," coded for "indicators" of goal attainment (a priori categories). Multiple-choice survey questions (a priori categories). Course grades. Documentation of faculty workload hours.
Interpretive (qualitative): Transcripts of focus group discussions and interviews.

Form of the analysis
Empirical-Analytic (quantitative): Statistical assessment of change over time in coded observations, collapsed over interview and group discussions. Statistical comparison of survey responses. Interviews taken at face value as statements of beliefs, attitudes, perceptions.
Interpretive (qualitative): Articulation of goals, aims, feelings. Unconscious, preexisting assumptions. Rule enforcement: encouragement, modeling of goal-directed behavior. Creation of social contexts that encourage goal-directed behavior, safe atmosphere, sense of purpose. Discussion of how participants "understand" the goals.

Findings
Empirical-Analytic (quantitative): No observed differences among faculty, staff, and students. No significant difference in course grades between critical thinking and control groups. Increase in faculty workload hours.
Interpretive (qualitative): Goal-directed behavior is enacted differently by different faculty, staff, and students, due to preexisting assumptions about the department and different focus of attention. Increase in faculty/student interactions outside of class. Change in type of preparation and feedback given to students, thereby altering faculty perceptions of workload.

Norms
Empirical-Analytic (quantitative): "Indicators" of successful goal-directed behavior are treated as factual. Normative findings: decrease in faculty utterances in class discussions, increase in participant-participant interactions and number of participants speaking, increase in categories such as "substantive," "probing-monitoring," "management," etc.
Interpretive (qualitative): Empirical evidence sought of goal-directed behavior. Normative findings: interpretive authority based on persuasive justification (use of evidence and explanation; questioning) and a sense that the text is open to different readings, vs. faculty as authority and guide, with the sense that there is a single text meaning.
lytic researchers' assumption that there is a "true" reality would direct them to determine patterns of relationships among numerous variables (race, gender, previous English classes, as well as course grades, postgraduation job placement and performance) and critical thinking, and generate laws explaining how the critical thinking is having an impact (e.g., documenting any increase or decrease in faculty workload hours). The interpretive researchers would resist generating explanations about how students are experiencing the component, arguing that each student and faculty member constructs a different understanding and, therefore, generalized explanations are not possible.

The two paradigms also have different design details. The goal of interpretive research is to get as close to describing the participants' understanding as possible. Observing three classes and interviewing a select few students, faculty, and graduates provides researchers with an opportunity to analyze transcripts in detail. Subsequently, they can compare transcript analysis (1) to see if the intervention was implemented the same way by faculty, (2) to document how the students and faculty understood the intervention, and (3) to document participant understanding of the impact of the critical thinking focus. In contrast, empirical-analytic researchers may observe five classes and code observations by a priori "indicators" of interactive/collaborative learning (i.e., coding certain types of interactions and comparing between classes). By approaching classroom observations in such a manner (i.e., a priori indicators), these qualitative methods (interviews) are being done from within an empirical-analytic framework.
This highlights the point that just doing an interview does not
necessarily indi-
cate one is engaged in qualitative (interpretive paradigm)
research. It is the
assumptions being made about methodology, ontology, and
epistemology that
determine whether the interview is truly qualitative. Faculty
interviews would
be coded the same way as well.
One of the major differences between the two approaches is the
implementa-
tion of an assessment instrument. Within the empirical-analytic
paradigm, the
assessment instrument items would be constructed beforehand
from the re-
searchers' ideas of what critical thinking is and what they think
should be
important outcomes of the new component. For this example,
the assessment
would be a pretest and posttest comparing critical thinking
skills between the
critical thinking group and control group. Following the
administration of the tests, the researchers would compare the scores of the "critical thinking" group with those of the control group to document any statistically significant differences between the two groups on critical thinking.
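As a concrete sketch of the comparison just described, the following runs an independent-samples t-test on pretest-to-posttest gain scores. It is an illustration only: the score arrays are fabricated placeholders, SciPy is assumed, and the original example specifies no particular statistical test.

# Minimal sketch of the pretest/posttest comparison; all scores are placeholders.
import numpy as np
from scipy import stats

# Hypothetical (pretest, posttest) critical-thinking scores per student.
treatment = np.array([(62, 74), (58, 69), (71, 78), (66, 70), (60, 72)])
control = np.array([(63, 66), (59, 63), (70, 73), (65, 64), (61, 65)])

# Gain scores: posttest minus pretest.
treatment_gain = treatment[:, 1] - treatment[:, 0]
control_gain = control[:, 1] - control[:, 0]

# Welch's independent-samples t-test on the gains.
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # small p -> "significant difference"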
As one can see from Table 2, the type of material obtained also
differs. As
mentioned previously, the interpretive approach would yield
transcripts of class discussions and faculty interviews. For the empirical-analytic approach, we would get observation notes coded for indicators of interactive/collaborative learning in addition to the survey and assessment instrument
information.
The form of analysis is one main area of difference between the
two ap-
proaches. Empirical-analytic researchers would perform
statistical analyses
comparing the assessment and survey scores of the control
group with those of
the critical thinking group. The faculty interviews would be
taken at face value
as a statement of the faculty members' beliefs, attitudes, and
perceptions. In
addition, a mean for specific codes could be calculated and
correlations among
other variables of interest could be obtained, thereby
indicating that the re-
searchers were operating within the empirical-analytic
paradigm. In contrast,
interpretive researchers would analyze the classroom discussion
tapes and at-
tempt to articulate the goals, aims, and feelings of the participants. Interpretive researchers would look to identify preexisting assumptions on the part of the participants and document where they see adherence to the critical thinking component guidelines and where they see instances of
interactive/collab-
orative discussion. An interpretive analysis would attempt to
identify how the
students and faculty understood the class discussion and the
critical thinking
emphasis.
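The following is a short sketch of the empirical-analytic side of this analysis, computing mean frequencies for a priori codes and their correlation with another variable of interest. It assumes pandas, and the data frame and its values are hypothetical placeholders, not figures from the example.

# Minimal sketch: mean frequencies of a priori codes and their correlations
# with another variable of interest. The DataFrame contents are hypothetical.
import pandas as pd

# One row per observed class session.
observations = pd.DataFrame({
    "substantive": [12, 9, 15, 7, 11],      # count of "substantive" utterances
    "probing_monitoring": [4, 6, 3, 5, 4],  # count of "probing-monitoring" moves
    "mean_course_grade": [3.1, 2.8, 3.4, 2.7, 3.0],
})

# Mean frequency for each a priori code across sessions.
print(observations[["substantive", "probing_monitoring"]].mean())

# Pearson correlations of each code with the variable of interest.
print(observations.corr()["mean_course_grade"])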
For this hypothetical program review, let us say that there were
no differ-
ences between the classes who engaged in the discussion
experience and those
who did not. In the empirical-analytic paradigm, this result
would be indicated
by no significant difference between the two groups on the
assessment instru-
ment. In addition, student grades in other writing-based courses
were not signif-
icantly different between the two groups and graduates. The
interpretive re-
searcher would note that the intervention was implemented
differently by differ-
ent faculty due to preexisting assumptions about the text and the
intervention,
and the focus within the intervention of each faculty member.
The empirical-
analytic researcher would conclude that the critical thinking
focus was not sig-
nificantly improving upon what was already being done in the
department
whereas the qualitative researcher would conclude the focus was
not imple-
mented the way it was intended or that it was implemented
differently depend-
ing on the faculty member involved and student perception of
the department.
Finally, the norms adhered to differ between the two paradigms. For empirical-analytic researchers, the indicators for successful impact of the critical thinking focus are treated as factual; the evidence sought consists of those indicators. Interpretive researchers, by contrast, would look for empirical evidence (observations) of critical reflection and of how critical reflection is "understood" by departmental participants. For this example, let us say that there were changes in the discussion dynamics over the course of the observation period. Empirical-analytic researchers would see a decrease in faculty comments over the three months of observations, with a concomitant increase in student-student interactions and a corresponding increase in the other coding categories. On the other hand, interpretive researchers would notice that those assuming authority during the discussion are the ones able to build more persuasive arguments for their text interpretations. Interpretive researchers would note that the intervention was implemented in slightly different ways, with one group assuming responsibility and believing the texts to be open to multiple interpretations; other groups would believe the text has one meaning and would then rely on the faculty to guide them to that meaning.
Overall, therefore, empirical-analytic researchers may conclude that the critical thinking focus was not successful, as indicated by the nonsignificant findings. In addition, they would see the increase in faculty workload and might conclude that the time invested by faculty is not translated into the hoped-for positive outcomes. In contrast, interpretive researchers would indicate that they found that different faculty articulated different understandings of the critical thinking focus and, therefore, implemented it differently. In addition, interpretive researchers would articulate faculty perceptions of how their workload hours changed in terms of quality: faculty were not just spending more time, but that time entailed more intense preparation for class to ensure critical engagement during discussion, as well as more attention to the type of feedback given to students to facilitate their critical thinking skills.
IMPLICATIONS FOR INSTITUTIONAL RESEARCH
As in everyday life, institutional researchers need both modes of inquiry, both ways of knowing, and both kinds of knowledge to advance understanding of our specific college or university. Most social scientists and educational researchers have typically advocated the use of one or the other mode of inquiry. Institutional researchers, for their part, tend to rely on empirical-analytic research more regularly (Peterson, 1985a, 1985b). The reasons for the preference for empirical-analytic research are elusive. Perhaps it stems from an artifact left over from the social sciences, or from the fact that the interpretive paradigm has not yet been seen as a viable and useful tool for understanding colleges and universities. This artifact entails the drive for institutional researchers to have their work viewed as based on "true science." Despite the success and usefulness of empirical-analytic research in institutional research, its limitations, for the social sciences and for institutional research alike, have become increasingly apparent and of concern recently (Donmoyer, 1985; Eisner, 1981, 1983; Firestone, 1987; Garrison, 1986; Giarelli and Chambliss, 1988; Howe, 1985, 1988; Hutchinson, 1988; Lincoln and Guba, 1985; Marshall, Lincoln, and Austin, 1991; Peterson and Spencer, 1993; Sherman and Webb, 1988). Empirical-analytic research systematically overlooks critical features that often render the results epistemologically limited (Guba, 1987; Guba and Lincoln, 1981). Such features include the definition of human action in specific settings, the actor's particular definition of his or her situation, the human interest of the actor, and the historical context of the situation. These issues are exemplified by the program review example described previously, particularly in reference to how the faculty and students understood the critical thinking focus. Each faculty member had different perceptions of the component, perceptions possibly influenced by institutional culture and climate. The empirical-analytic approach neglects this information. These shortcomings can be overcome by qualitative research techniques.
Interpretive research, however, may appear replete with subjectivism and be viewed by university administrators as having questionable precision, rigor, or credibility. It may be easier for an administrator to make a decision based on findings from a large sample than to trust a description of five case studies or five in-depth interviews. University administrators need to make decisions on what they think is a "typical" or "average" case that holds true across various university environments or in particular departments. One cannot fault an administrator for being uncomfortable basing a policy decision on five or six well-described cases when an empirical-analytic approach (with an accompanying large database) might provide a better opportunity to generalize. However, these shortcomings can be overcome by empirical-analytic research.
Institutional research is currently characterized by two broad approaches. One is based on the assumption that there exists a true reality, whereas the other is based on the assumption that there is no true reality but rather a reality that is constructed by the shared understandings of participants. Both are methodologically precise. One utilizes techniques that produce results generalizable across contexts but neglects the reality of institutions; the other provides the researcher with in-depth knowledge that often is not generalizable. Although educational researchers and social scientists have debated the merits of combining the approaches, for institutional researchers, using both approaches can only strengthen the rigor with which they approach their assigned tasks. However, as we have seen, the choice embodies not a simple decision between methodologies, but an understanding of the philosophical assumptions concerning reality, the role of the researcher, what knowledge is, and what data are. By using both approaches, institutional researchers can strengthen the results of their endeavors. Institutional researchers need to identify and refer to exemplars of good research: research that is both methodologically precise and grounded in an understanding of the philosophical assumptions undergirding both approaches.
Empirical-Analytic and Interpretive Research Used Together
Institutional research studies require that both approaches be simultaneously pursued, either by different researchers or by a single researcher. Of course, it must be acknowledged that some questions are more amenable to being investigated by one approach, but using both enhances institutional researchers' ability to understand "what is going on." Each mode offers distinctive advantages, suggesting circumstances (type of problem, state of knowledge, unit of analysis, researcher's role and purpose) in which one may be more appropriate. Qualitative research is more useful for exploring institutional phenomena, articulating participants' understandings and perceptions, and generating tentative concepts and theories that directly pertain to particular environments (e.g., academic departments, specific residence halls). By yielding in-depth knowledge of particular situations, it also more directly serves practitioners' and administrators' needs. The policies and/or decisions based on this type of interpretive information may be more directly suited to the specifics of the milieu from which it was derived. Quantitative research is suited to theory testing and developing universal statements. It also provides a "general" picture of an institutional situation or academic department climate.
The choice of approach will depend on a number of factors. First, the choice will depend on the institutional researcher's personal training, cognitive style, and preference. Second, the choice will no doubt depend on those being researched. For example, some individuals may be particularly uncomfortable with the idea of being interviewed, and others may not like being filmed so that their interactions with students can be analyzed. Third, the choice could also depend on the intended audience for the research. Some audiences may want a concise summary of findings, which is more easily produced by empirical-analytic inquiry, rather than in-depth articulations of subjects' realities. Fourth, the choice may depend on time and money, issues often on the minds of institutional researchers. Often, people assume that qualitative research involves a larger investment of time. In reality, the time needs for quantitative and qualitative research may be close to equal, just distributed differently. For quantitative research, much of the time is spent developing surveys, distributing them, compiling the data, analyzing the data, and presenting the results; interpretation of the results is a relatively small portion of the overall time spent on a quantitative study. In qualitative research, on the other hand, the majority of time is spent on interpretation, analyzing pages of transcripts and viewing videotapes over and over, while the time spent on data collection is a relatively small portion of the overall commitment. Finally, one cannot ignore the history of the institutional research office, its preferred research approach, and what those who use the office prefer in terms of research. One cannot suddenly switch from quantitative to qualitative methods without checking the political ramifications of doing so.
In contrasting the two research approaches, the attempt has been to discuss the limitations associated with different ways of knowing. In light of these limitations, continuing the exclusive use of one approach, which has characterized institutional research, will produce limited results: results that are methodologically rigorous but at times inappropriate. Institutional researchers' abilities to grasp the breadth, depth, and richness of college and university life are hampered by allegiance to a single mode of inquiry. Institutional researchers' efforts to develop comprehensive pictures of college and university phenomena are handicapped when only one approach (either quantitative or qualitative) is advocated and practiced. We can administer a survey regarding the benefits of a new departmental focus and find that the new approach is not increasing student performance on particular skills. We could find out that there is no improvement, but we will not know exactly why. A survey could point an institutional researcher to the problem, and in-depth interviews with some students could provide the information necessary to begin to explain the "no improvement" finding.
Institutional researchers can alternate between the two approaches. Peterson (1985a) advocates alternating between quantitative and qualitative research, using findings generated from one approach to generate research questions for the other. In the previous example, there was no significant improvement in skills following a new departmental focus. Using this information, an institutional researcher could interview students to see how they experienced the new focus. Through these interviews a researcher could identify common themes (e.g., students feel positive about the focus; however, it is not being implemented consistently) that could be used to generate questionnaire items for additional surveys. By alternating between the two modes, an institutional researcher could get a more accurate picture of the new departmental focus than may have been possible using only one approach.
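As a rough illustration of this alternation, the sketch below tallies hypothetical theme codes from student interviews so that the most frequent themes can seed the next round of questionnaire items. The theme labels are invented; nothing here comes from Peterson (1985a) or from the article itself.

# Hypothetical sketch: interview themes feed the next survey round.
from collections import Counter

# Theme codes assigned to interview excerpts (invented for illustration).
coded_excerpts = [
    "positive_about_focus", "inconsistent_implementation",
    "positive_about_focus", "workload_concern",
    "inconsistent_implementation", "positive_about_focus",
]

# Rank themes by frequency; frequent themes become candidate survey items.
for theme, count in Counter(coded_excerpts).most_common():
    print(f"{count} excerpts -> draft a questionnaire item on '{theme}'")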
CONCLUSION
A major reason why research methodology in institutional research is such an exciting area is that institutional research is not itself a discipline. Indeed, it is hard to describe institutional research. However, it is a field containing phenomena, events, institutions, problems, persons, and processes, which themselves constitute the raw material for inquiries of many kinds. Many of these inquiries provide the foundation from which to develop policies and institutional interventions.
Due to the complexity of institutional research, the choice of research approach to a question should not be taken lightly. Each approach to an institutional research problem or question brings its own unique perspective. Each sheds its own distinctive light on the situations and problems institutional researchers seek to understand (Peterson and Spencer, 1993). The issue is not choosing a qualitative or nonqualitative approach; it is deciding how an institutional researcher approaches the world. Choosing an approach is not a decision between methods; each choice is replete with underlying assumptions about reality. Research methods are not merely different ways of achieving the same end. They carry with them different ways of asking questions and often different commitments to educational and social ideologies. The attempt here has been to clarify the distinction between the two approaches so that they are viewed as more than simply alternative methods.
As institutional researchers employ their crafts, they make a multitude of decisions concerning research methods. These decisions have a direct impact on how they make meaning and how reality is structured and understood by institutional researchers and their constituencies. In some ways, the choice of quantitative and qualitative approaches creates the reality we are attempting to discover. By making a choice between quantitative or qualitative inquiry, "to a significant extent, we choose our world view" (Allender, 1986, p. 188). For institutional researchers, it is not just a choice between "doing interviews" or "conducting a survey"; it is a choice between assumptions about the world.
NOTES
1. Definitions of some terms are in order here. Commensurability refers to the ability to compare philosophical underpinnings against a neutral frame of reference. Epistemology is the investigation or study of the origin, structure, methods, and validity of knowledge (Runes, 1983). Ontology is a theory as to what exists (Urmson and Ree, 1989) or the assumptions about existence underlying any conceptual scheme, theory, or system of ideas (Flew, 1984). Phenomenology is the study of how the world is experienced from the actor's/subject's own frame of reference (Patton, 1980). Hermeneutics is the art and science of interpreting the meaning of texts, which stresses how prior understandings and prejudices shape the interpretive process (Runes, 1983). Dialectic refers to a process through which what is known emerges within an interaction between the knower and what is to be known.
2. Please see Guba and Lincoln (1988) for an in-depth treatment of the distinction between method and methodology.
3. Moss (1990) provides a brief and useful discussion about the distinction among the terms incompatible, incommensurable, and incomparable. The reader is directed to Moss's comments as well as to Bernstein's (1983) in-depth discussion concerning the definitions of these terms.
4. With the advent and development of critical theory and postmodernism, some (Lather, 1991a, 1991b) would argue that we are currently immersed in a crisis over what it means to do research. The reader is directed to Darder (1991), Giroux (1988), and Gore (1993) for descriptions of critical theory, and Bauman (1992), Giroux (1991), and Rosenau (1992) for descriptions of postmodernism.
5. For this paper, the distinction between the quantitative and qualitative paradigms is being used as a heuristic device. One must note, however, that this distinction may oversimplify the various philosophical differences even within the two paradigms.
6. To distinguish the different interpretive approaches is beyond the purview of this paper. The reader is directed to Denzin and Lincoln's (1994) Handbook of Qualitative Research and Lancy (1993) for in-depth explorations of the distinctions.
7. Firestone (1987) argues that the two paradigms can also be distinguished by differing rhetoric. In essence, quantitative and qualitative methods "lend themselves to different kinds of rhetoric" (p. 16). Subsequently, each method type uses different presentation techniques and means of persuasion to express assumptions about methodology, ontology, and epistemology and to convince readers about conclusions.
8. For the sake of this paper, methodology, ontology, and epistemology have been separated for convenience and clarity. Generally, however, these concepts are so intertwined that discussing one almost necessitates discussing one or both of the others. For example, discussing what each paradigm believes to be "reality" (ontology) almost dictates what can be known (epistemology) about that reality and how that reality can be measured (methodology).
9. The degree of engagement varies depending on the qualitative approach being used. For example, nonparticipant observers are not actively involved in the situation, whereas participant observers attempt to assume a role to understand the reality as constructed and comprehended by those in the situation.
10. Smith and Heshusius (1986) raise a common concern among many educational researchers that qualitative research should not be thought of as just a procedural variation of quantitative research. The reader should note this caveat when entertaining the notion of using qualitative data collection and analysis as an avenue from which to generate categories for quantitative research.
11. It would be misleading to imply that qualitative research is not concerned with generalizability. Firestone (1993) highlights the various arguments for, and the types of generalizability within, qualitative research, and Kirk and Miller (1986) discuss reliability and validity in qualitative research.
REFERENCES
Allender, J. S. (1986). Educational research: A personal and social process. Review of Educational Research 56(2): 173-193.
Banks, J. A. (1988). Multiethnic Education, 2nd ed. Boston: Allyn & Bacon.
Bauman, Z. (1992). Intimations of Postmodernity. New York: Routledge.
Bernstein, R. J. (1976). The Restructuring of Social and Political Theory. Philadelphia: The University of Pennsylvania Press.
Bernstein, R. J. (1983). Beyond Objectivism and Relativism. Science, Hermeneutics, and Praxis. Philadelphia: The University of Pennsylvania Press.
Bohannon, T. R. (1988). Applying regression analysis to problems in institutional research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 43-60. San Francisco: Jossey-Bass, Publishers.
Bunda, M. A. (1991). Capturing the richness of student outcomes with qualitative techniques. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 35-47. San Francisco: Jossey-Bass, Publishers.
Cziko, G. A. (1989). Unpredictability and indeterminism in human behavior: Arguments and implications for educational research. Educational Researcher 18(3): 17-25.
Darder, A. (1991). Culture and Power in the Classroom. A Critical Foundation for Bicultural Education. New York: Bergin & Garvey.
Denzin, N. K. (1971). The logic of naturalistic inquiry. Social Forces 50: 166-182.
Denzin, N. K., and Y. S. Lincoln (1994) (eds.). Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications, Inc.
Dewey, J. (1933). How We Think. Boston, MA: D. C. Heath.
Dilthey, W. (1990). The rise of hermeneutics. In G. L. Ormiston and A. D. Schrift (eds.), The Hermeneutic Tradition. From Ast to Ricoeur (pp. 101-114). Albany: State University of New York Press.
Donmoyer, R. (1985). The rescue from relativism: Two failed attempts and an alternative strategy. Educational Researcher 14(10): 13-20.
Eisner, E. W. (1981). On the differences between scientific and artistic approaches to qualitative research. Educational Researcher 10(4): 5-9.
Eisner, E. W. (1983). Anastasia might still be alive, but the monarchy is dead. Educational Researcher 12(5): 13-14, 23-24.
Fetterman, D. M. (1991). Qualitative resource landmarks. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 81-84. San Francisco: Jossey-Bass, Publishers.
Fincher, C. (1985). The art and science of institutional research. In M. W. Peterson and M. Corcoran (eds.), Institutional Research in Transition, pp. 17-37. New Directions for Institutional Research. San Francisco: Jossey-Bass Inc., Publishers.
Firestone, W. A. (1987). Meaning in method: The rhetoric of quantitative and qualitative research. Educational Researcher 16(7): 16-21.
Firestone, W. A. (1993). Alternative arguments for generalizing from data as applied to qualitative research. Educational Researcher 22(4): 16-23.
Flew, A. (1984). A Dictionary of Philosophy. London: The Macmillan Press Ltd.
Garrison, J. W. (1986). Some principles of postpositivistic philosophy of science. Educational Researcher 15(9): 12-18.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Giarelli, J. M., and Chambliss, J. J. (1988). Philosophy of education as qualitative inquiry. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 30-43. New York: The Falmer Press.
Giroux, H. A. (1988). Schooling and the Struggle for Public Life. Critical Pedagogy in the Modern Age. Minneapolis, MN: University of Minnesota Press.
Giroux, H. A. (1991) (ed.). Postmodernism, Feminism, and Cultural Politics. Albany, NY: State University of New York Press.
Gordon, E. W., F. Miller, and D. Rollock (1990). Coping with communicentric bias in knowledge production in the social sciences. Educational Researcher 19(3): 14-19.
Gore, J. M. (1993). The Struggle for Pedagogies. Critical and Feminist Discourses as Regimes of Truth. New York: Routledge.
Guba, E. (1987). What have we learned about naturalistic evaluation? Evaluation Practice 8(1): 23-43.
Guba, E., and Y. Lincoln (1981). Effective Evaluation. San Francisco: Jossey-Bass.
Guba, E. G., and Y. S. Lincoln (1988). Do inquiry paradigms imply inquiry methodologies? In D. M. Fetterman (ed.), Qualitative Approaches to Evaluation in Education: The Silent Scientific Revolution, pp. 89-115. New York: Praeger Publishers.
Gubrium, J. (1988). Analyzing Field Reality. Newbury Park, CA: Sage.
Habermas, J. (1971). Knowledge and Human Interests. Boston, MA: Beacon.
Habermas, J. (1988). On the Logic of the Social Sciences (S. W. Nicholsen and J. A. Stark, trans.). Cambridge, MA: The MIT Press.
Hall, E. T. (1976). Beyond Culture. New York: Doubleday.
Herriott, R. E., and W. A. Firestone (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher 12(2): 14-19.
Heyl, J. D. (1975). Paradigms in social science. Society 12(5): 61-67.
Hinkle, D. E., G. W. McLaughlin, and J. T. Austin (1988). Using log-linear models in higher education research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 23-42. San Francisco: Jossey-Bass, Publishers.
Howe, K. R. (1985). Two dogmas of educational research. Educational Researcher 14(8): 10-18.
Howe, K. R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher 17(8): 10-16.
Hutchinson, S. A. (1988). Education and grounded theory. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 123-140. New York: The Falmer Press.
Jacob, E. (1988). Clarifying qualitative research: A focus on traditions. Educational Researcher 17(1): 16-24.
James, W. (1918). The Principles of Psychology. New York: Dover.
Jennings, L. W., and D. M. Young (1988). Forecasting methods for institutional research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 77-96. San Francisco: Jossey-Bass, Publishers.
Kaplan, A. (1964). The Conduct of Inquiry. San Francisco: Chandler.
Kent, T. (1991). On the very idea of a discourse community. College Composition and Communication 42(4): 425-445.
Kirk, J., and M. L. Miller (1986). Reliability and Validity in Qualitative Research. Beverly Hills, CA: Sage Publications, Inc.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions, 2nd ed. Chicago: University of Chicago Press.
Kuhn, T. S. (1974). Second thoughts on paradigms. Reprinted in The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press.
Kvale, S. (1983). The qualitative research interview: A phenomenological and a hermeneutical mode of understanding. Journal of Phenomenological Psychology 14(2): 171-196.
Lancy, D. (1993). Qualitative Research in Education. New York: Longman.
Lather, P. (1991a). Getting Smart: Feminist Research and Pedagogy Within the Postmodern. New York: Routledge.
Lather, P. (1991b). Deconstructing/deconstructive inquiry: The politics of knowing and being known. Educational Theory 41(2): 153-173.
Lewin, K. (1951). Field Theory in Social Science. New York: Harper.
Lincoln, Y. S., and E. G. Guba (1985). Naturalistic Inquiry. Beverly Hills: Sage Publications.
Marshall, C., Y. S. Lincoln, and A. E. Austin (1991). Integrating a qualitative and quantitative assessment of the quality of academic life: Political and logistical issues. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 65-80. San Francisco: Jossey-Bass, Publishers.
McCracken, G. (1988). The Long Interview. Newbury Park, CA: Sage Publications, Inc.
Merton, R. (1972). Insiders and outsiders: A chapter in the sociology of knowledge. In Varieties of Political Expression in Sociology. Chicago: The University of Chicago Press.
Miles, M. B., and A. M. Huberman (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher 13(5): 20-30.
Mishler, E. G. (1986). Research Interviewing. Context and Narrative. Cambridge, MA: Harvard University Press.
Moss, P. A. (1990, April). Multiple Triangulation in Impact Assessment: Setting the Context. Remarks prepared for oral presentation in P. LeMahieu (Chair), Multiple triangulation in impact assessment: The Pittsburgh discussion project experience. Symposium conducted at the annual meeting of the American Educational Research Association, Boston, Massachusetts.
Packer, M. J., and R. B. Addison (1989). Introduction. In M. J. Packer and R. B. Addison (eds.), Entering the Circle: Hermeneutic Investigation in Psychology, pp. 13-36. Albany: State University of New York Press.
Patton, M. Q. (1980). Qualitative Evaluation Methods. Beverly Hills, CA: Sage Publications, Inc.
Peterson, M. W. (1985a). Emerging developments in postsecondary organization theory and research: Fragmentation or integration. Educational Researcher 14(3): 5-12.
Peterson, M. W. (1985b). Institutional research: An evolutionary perspective. In M. W. Peterson and M. Corcoran (eds.), Institutional Research in Transition, pp. 5-15. New Directions for Institutional Research, no. 46. San Francisco: Jossey-Bass Inc., Publishers.
Peterson, M. W., and M. G. Spencer (1993). Qualitative and quantitative approaches to academic culture: Do they tell us the same thing? Higher Education: Handbook of Theory and Research, Vol. IX, pp. 344-388. New York: Agathon Press.
Phillips, D. C. (1983). After the wake: Postpositivistic educational thought. Educational Researcher 12(5): 4-12.
Pike, K. L. (1967). Language in Relation to a Unified Theory of the Structure of Human Behavior. The Hague: Mouton.
Rabinow, P., and W. M. Sullivan (1987). The interpretive turn: A second look. In P. Rabinow and W. M. Sullivan (eds.), Interpretive Social Science. A Second Look, pp. 1-30. Berkeley, CA: University of California Press.
Rogers, C. R. (1951). Client-Centered Therapy. Boston: Houghton.
Rosenau, P. M. (1992). Post-Modernism and the Social Sciences: Insights, Inroads, and Intrusions. Princeton, NJ: Princeton University Press.
Rossman, G. B., and B. L. Wilson (1985). Numbers and words. Combining quantitative and qualitative methods in a single large-scale evaluation study. Evaluation Review 9(5): 627-643.
Runes, D. D. (1983). Dictionary of Philosophy. New York: Philosophical Library, Inc.
Schutz, A. (1967). The Phenomenology of the Social World. Evanston, IL: Northwestern University Press.
Sherman, R. R., and R. B. Webb (1988). Qualitative research in education: A focus. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 2-21. New York: The Falmer Press.
Shulman, L. S. (1981). Disciplines of inquiry in education: An overview. Educational Researcher 10(6): 5-12, 23.
Smith, J. K. (1983a). Quantitative versus qualitative research: An attempt to clarify the issue. Educational Researcher 12(3): 6-13.
Smith, J. K. (1983b). Quantitative versus interpretive: The problem of conducting social inquiry. In E. House (ed.), Philosophy of Evaluation, pp. 27-52. San Francisco: Jossey-Bass Publishers.
Smith, J. K. (1984). The problem of criteria for judging interpretive inquiry. Educational Evaluation and Policy Analysis 6(4): 379-391.
Smith, J. K., and L. Heshusius (1986). Closing down the conversation: The end of the quantitative-qualitative debate among educational inquirers. Educational Researcher 15(1): 4-12.
Soltis, J. F. (1984). On the nature of educational research. Educational Researcher 13(10): 5-10.
Stanfield, J. H. (1985). The ethnocentric basis of social science knowledge production. In E. W. Gordon (ed.), Review of Research in Education, vol. 12, pp. 387-415. Washington, DC: American Educational Research Association.
Taylor, C. (1987). Interpretation and the science of man. In P. Rabinow and W. M. Sullivan (eds.), Interpretive Social Science. A Second Look, pp. 33-81. Berkeley, CA: University of California Press.
Tierney, W. G. (1991). Utilizing ethnographic interviews to enhance academic decision making. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 7-22. San Francisco: Jossey-Bass, Publishers.
Urmson, J. O., and J. Ree (1989) (eds.). The Concise Encyclopedia of Western Philosophy and Philosophies. Boston: Unwin Hyman.
Yancey, B. D. (1988a). Exploratory data analysis methods for institutional researchers. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 97-110. San Francisco: Jossey-Bass, Publishers.
Yancey, B. D. (1988b). Institutional research and the classical experimental paradigm. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 5-10. San Francisco: Jossey-Bass, Publishers.
Received May 23, 1994.
“RESEARCH TOPIC: LEARNING AND DEVELOPMENT IN CORPORATE AMERICA”
The Role of Your Paradigms
At this stage in the overall conceptualizing of your data-driven
project, how do you see the role of qualitative, quantitative,
and/or mixed-methods research paradigms adding to (or perhaps
detracting from) your work?
Based on your decisions for the type(s) of data you would
collect, what critiques would you anticipate from others at the
institution?
Would others perceive your research as providing valuable insights or actionable information?
How would you respond?
Be sure to ground your discussion in our assigned readings for
the week as you identify not only what type(s) of data you
would gather, but also what the implications of these data may
be.
· Your initial post (approximately 200-250 words) should address each question in the discussion directions.

Running head SAMPLE PAPER 1 A Sample Paper for the Purpos.docx

  • 1.
    Running head: SAMPLEPAPER 1 A Sample Paper for the Purpose of Correct Formatting Student Name Liberty University Per the Publication Manual of the American Psychological Association (APA; 6th edition), double-space the entire paper (p. 229), except with charts or tables. Do not add any extra spacing. Use Times New Roman, 12-point font. Do not use bold except for headings as necessary (see page 62 of your APA manual). Margins are set for 1" on top, bottom, and sides. All page references will be to the APA manual, 6th edition. Add two spaces after punctuation at the end of each sentence, except in the reference list, for the sake of readability (pp. 87-88). The header on the cover page is different from the headers on the rest of the paper. Only the cover page header includes the words Running head (without the italics; p. 41). The header is flush left but the page numbers are flush right (see bottom of p. 229). Make sure the header font is the same as the rest of the paper. Handouts on how to format the cover page (as well as other handouts) are available on the
  • 2.
    Online Writing Center’swebpage: http://www.liberty.edu/index.cfm?PID=17176, and a superb YouTube video demonstration that provides visualized step-by-step instructions for setting a paper up in proper APA format is available at https://www.youtube.com/watch?v=KUjhwGmhDrI Note: Comments inside boxes are not part of the formatting of the paper. Section or page number references to the APA manual are denoted in parentheses throughout. Most citations within the body of this paper are fictional, for instructional purposes only, but are also included in the reference list for illustrative purposes of correlating citations in the body of the paper with resources in the reference list. . Note: Center the following information in the top half of the page: title, your name, and school name (2.01, p. 23; 41). Some professors require the course title and section, the instructor’s name, and the date; add those on the lines beneath the required title page information. Do not use contractions in formal papers—in either the title or the body of the paper (i.e., use “do not” rather than “don’t”). Titles should include no more than 12 words. Titles use upper and lowercase letters (i.e., “title case;”
  • 3.
    20.1, p. 23;see also 4.15 on pp. 101-102). Prepared by Christy Owen, Brian Aunkst, and Dr. Carmella O’Hare. Last updated June 28, 2016. http://www.liberty.edu/index.cfm?PID=17176 https://www.youtube.com/watch?v=KUjhwGmhDrI SAMPLE PAPER 2 Abstract Begin your abstract at the left margin (2.04 on p. 27; see also p. 229). This is the only paragraph that should not be indented. Unless otherwise instructed, APA recommends an abstract be between 150–250 words (p. 27). It should not contain any citations or direct quotes. This should be a tight, concise summary of the main points in your paper, not a step-by-step of what you plan to accomplish in your paper. Avoid phrases such as “this paper will,” and just structure your sentences to say what you want to say. The following three sentences exemplify a good abstract style: There are many similarities and differences between the codes of ethics for the ACA and the AACC. Both include similar mandates in the areas of ----, - --, and ---. However, each differs
  • 4.
    significantly in theareas of ---, ---, and ---. For more detailed information, see “Writing an Abstract” at http://www.liberty.edu/academics/graduate/writing/?PID=12268 This is just now at 168 words, so take a moment to eyeball how brief your abstract must be. Think of your paper as a movie, and the abstract as the summary of the plot that you would share to draw people’s interest into wanting to come and see your movie. Same thing: you want to really hook and intrigue them. What you have to say is important! Still only at 221 words here; remember to try to stay under 250, unless your professor advises otherwise. The keywords noted below highlight the search terms someone would use to find your paper in a database; they should be formatted as shown (indented ½”, with the word “Keywords” in italics, and the few key words in normal print, separated by a comma. Keywords: main words, primary, necessary, search terms http://www.liberty.edu/academics/graduate/writing/?PID=12268
  • 5.
    SAMPLE PAPER 3 ASample Paper for the Purpose of Correct Formatting The title of your paper goes on the top line of the first page of the body. It should be centered, unbolded, and in title case (all major words—usually those with four+ letters—should begin with a capital letter) --- see figure 2.1 on p. 42 and 4.15 on pp. 101-102. You can either give a brief introductory paragraph below that or go straight into a Level 1 heading. In APA format, the Introduction never has a heading (simply begin with an introductory paragraph without the word "Introduction"); see first paragraph of section 2.05 on page 27, as well as the first sentence under the bolded headings on page 63 of your APA manual (American Psychological Association [APA], 2010). As shown in the previous sentence, use brackets to denote an abbreviation within parentheses (third bullet under 4.10). Write out acronyms the first time mentioned, such as American Psychological Association for APA, and then use the acronym throughout the body of the paper (4.22; note the section on underuse, however, at the
  • 6.
    top of p.107). Basic Rules of Scholarly Writing Most beginning students have difficulty learning how to write papers and also format papers correctly using the sixth edition of the APA manual (APA, 2010). However, the Liberty University Online Writing Center’s mission includes helping students learn how to be autonomous, proficient writers, and thus this sample paper is designed so it cannot be used as a template for inserting the correct parts. For the purpose of instruction, this paper will use second person (you, your), but third person (this author) must be used in most student papers. First person (I, me, we, us, our) is not generally permitted in scholarly papers. Students should refrain from using first or second person in academic courses (even though the APA manual appears to encourage this in other writing venues) unless the assignment instructions clearly permit such (as SAMPLE PAPER 4
  • 7.
    in the caseof personal reflection sections or life histories). Though some written assignments will not require an abstract, understand that APA generally requires one unless otherwise stated in your assignment instructions or grading rubric. Heading Levels—Level 1 This sample paper uses primarily one level of headings (Level 1), so each heading presented herein is centered and in boldface. APA style, however, has five heading levels, which will be demonstrated briefly for visual purposes. See page 62 of your APA manual (APA, 2010) if employing more than one level. Level 1 headings are bolded and in title case — capitalize each major word (usually those with four or more letters), including hyphenated compound words. Four-Year Pilot Study on Attachment Disorders, and Self-Awareness of Pollen are examples of headings with compound words. Do not capitalize articles (a, an, the) in headings unless they begin a title or follow a colon. Level 2 Heading Level 2 headings are bolded, in title case, and left-justified.
  • 8.
    The supporting information isposed in standard paragraph form beneath it. Never use only one of any level of heading. You must use two or more of any level you use, though not every paper will require more than one level. Level 3 heading. Is bolded, indented ½”, in sentence case (only the first word should begin with a capital letter in most cases), and ends with a period. Add two spaces, then begin typing your content on the same line, as presented in this paragraph. Level 4 heading. Same as Level 3, except italicized, too. Level 5 heading. Same as Level 4, but unbolded. Despite heavy writing experience, this author has never used Level 5 headings. SAMPLE PAPER 5 Annotated Bibliographies, Tables of Contents, and Outlines A few requirements in various assignments are not addressed in the APA manual, such as outlines, tables of content, and annotated bibliographies. APA
  • 9.
    does not regulateevery type of paper, including those forms. In those cases, follow your professor’s instructions and the grading rubric for the content and format of the outline or annotations, and use standard APA formatting for all other elements (such as running head, title page, body, reference list, 1" margins, double-spacing, Times New Romans 12-point font, etc.). That being said, when I organize outlines in APA format, I set my headings up in the proper levels (making sure there are at least two subheadings under each level), and then I use those to make the entries in the outline. Level 1 headings become Roman Numbers (I, II, III), Level 2 headings become capital letters (A, B, C), Level 3 headings become numbers (1, 2, 3), and Level 4 headings become lowercase letters (a, b, c). Some courses require “working outlines,” which are designed to have the bones and foundational framework of the paper in place (such as title page, abstract, body with title and headings, and references), without all the supporting “meat” that fills out and forms a completed paper
  • 10.
    Appendices Appendices, if any,are attached after the reference list (see top of p. 230). You must refer to them in the body of your paper so that your reader knows to look there (see top of p. 39). The word “Appendix” is singular; use it to refer to individual appendices. I am attaching a sample Annotated Bibliography as a visual aid in “Appendix A.” You will see that I included the title “Appendix A” at the top of the page and formatted it in standard APA format beneath that. SAMPLE PAPER 6 Crediting Your Sources Paraphrasing is rephrasing another’s idea in one’s own words. Quoting is using another’s exact words. Both need to be cited; failure to do so constitutes plagiarism. Liberty University also has a strict policy against a student using the same paper (or portions thereof) in more than one class or assignment, which it deems “self-plagiarism.”
  • 11.
    Students who wantto cite their own prior work must cite and reference it just like any other source; see example in Owen (2012). Include the author(s) and year for paraphrases and the author(s), year, and page or paragraph number for direct quotes. Page numbers should be used for any printed material (books, articles, etc.), and paragraph numbers should be used in the absence of page numbers (online articles, webpages, etc.; 6.05, pp. 171-172). Use p. for one page and pp. (not italicized in your paper) for more than one. Use para. for one paragraph and paras. (also not italicized in your paper) for two or more. For example: (Perigogn & Brazel, 2012, pp. 12–13) or (Liberty University, 2015 para. 8). Section 6.04 of the APA (2010) manual says, “When paraphrasing or referring to an idea contained in another work, you are encouraged to provide a page or paragraph number, especially when it would help an interested reader locate the relevant passage in a long or complex text” (p. 171). When naming authors in the text of the sentence itself (called a narrative
  • 12.
    citation), use theword “and” to connect them. For example, “Allen, Bacon, and Paul (2011) contemplated that . . .” Use an ampersand (&) in place of the word “and” in parenthetical citations and reference lists: (Allen, Bacon, & Paul, 2011). APA’s (2010) official rule is that you must cite your source every single time you refer to material you gleaned from it (pp. 15-16). You can vary your sentence structure to include both narrative and parenthetical citations in order to avoid redundancy. There is, however, an SAMPLE PAPER 7 unofficial trend amongst some professors who require their students to cite their sources only once per paragraph (the first time you refer to it, not merely at the end of the paragraph, which can be interpreted as an afterthought), despite this being in conflict with standard APA formatting. You will want to clarify which your professor prefers; if in doubt, cite every time. That being said, APA (2010) has a special rule that excludes the year of publication in
  • 13.
    narrative in-text citations(when you name the authors in the text of the sentence itself), after the first citation in each paragraph ... provided that first citation is narrative (and not parenthetical). It should continue to appear in all parenthetical citations (see sections 6.11 and 6.12, pp. 174- 175). If the first citation in the paragraph is parenthetical, then ALL citations must include the year. The two examples in 6.11 on pp. 174-175 are subtle, but if you look carefully, you will be able to discern this for yourself. If the material you cited was referred to in multiple resources, separate different sets of authors with semicolons, arranged in the order they appear (alphabetically by the first author’s last name) in the reference list (Carlisle, n.d.; Prayer, 2015). Periods are placed after the closing parenthesis, except with indented (blocked) quotes. Quotes that are 40 or more words must be blocked, with the left margin of the entire quote indented ½ inch. Maintain double-spacing of block quotes. APA prefers that you introduce quotes, but note that the punctuation falls at the
  • 14.
    end of thedirect quote, with the page number outside of that (which is contrary to punctuation for non-blocked quotes). For example, Alone (2008) claims (note that there are no quotation marks for block quotes, as shown below): Half of a peanut butter sandwich contains as much bacteria as the wisp of the planet Mars. Thus, practicality requires that Mrs. Spotiker nibble one bit at a time until she is assured that she will not perish from ingesting it too quickly. (p. 13) SAMPLE PAPER 8 Usually quotes within quotes use single quotation marks, but use double quotation marks for quotes within blocked quotes, since there are no other quotation marks included within. Also understand that direct quotes should be used sparingly in scholarly writing; paraphrasing is much preferred in APA format. Only use quotes when changing the wording would change the original author’s meaning. You cannot simply change one word and omit a second; if you
  • 15.
    paraphrase, the wordingmust be substantially different, but with the same meaning. Regardless, you would need to cite the resource you took this information from. Authors with more than one work published in the same year are distinguished by lower- case letters after the years, beginning with a. For example, Double (2008a) and Double (2008b) would refer to resources by the same author published in 2008. If there are two different authors with the same last name but different first names who published in the same year, include the first initials: Brown, J. (2009) and Brown, M. (2009). The names of journals, books, plays, and other long works, if mentioned in the body of the paper, are italicized in title case (4.21). Titles of articles, lectures, poems, chapters, website articles, and songs should be in title case, encapsulated by quotation marks (4.07). The year of publication should always follow the author(s)’s name, whether in narrative or parenthetical format: Perigogn and Brazel (2012) anticipated, or (Perigogn & Brazel, 2012). The page or paragraph number must follow after the direct quote. Second
  • 16.
    (2015) asserted that“paper planes can fly to the moon” (p. 13). You can restate that with a parenthetical citation as: “Paper planes can fly to the moon” (Second, 2015, p. 13). Citations in the body of the paper should include only the last names, unless you have two or more resources authored by individuals with the same last name in the same year, such as Brown, J. (2009) and Brown, M. (2009) mentioned above. Numbers one through nine must be SAMPLE PAPER 9 written out in word format, with some exceptions (such as ages—see section 4.32 on page 112 of your APA manual). Numbers 10 and up must be written out in numerical format: 4.31(a). Always write out in word format any number that begins a sentence: 4.32(a). Three or More Authors When referring to material that comes from three to five authors, include all of the authors’ last names in the first reference. Subsequently, use just the first author’s last name
  • 17.
    followed by thewords et al. (without italics). Et al. is a Latin abbreviation for et alii, meaning “and others,” which is why the word “al.” has a period, whereas “et” does not. Alone, Other, and Other (2011) stipulated that peacocks strut. The second time I refer to their material, I would apply APA’s rule (Alone et al., 2011). When a work has six or more authors, cite only the last name of the first author in the body of the paper, followed by et al., as if you had already cited all of the authors previously (Acworth et al., 2011). Note that I had not cited the Acworth et al. (2011) resource previously in this paper. For seven or fewer authors in the references, write out all of the authors’ last names with first- and middle initials, up to and including the seventh author. APA has a special rule for resources with eight or more authors: Write out the first six authors’ last names with initials, insert an ellipsis (…) in place of the ampersand (&), and finish it with the last name and initials of the last author. See the examples provided in the chart on page 177 (APA, 2010), as well as
  • 18.
    this paper’s referencelist for visuals of these variances (Acworth et al. 2011; Harold et al., 2014). Primary Sources versus Secondary Sources APA strongly advocates against using secondary sources; rather, it favors you finding and citing the original (primary) resource whenever possible (6.17, p. 178). On the rare occasion SAMPLE PAPER 10 that you do find it necessary to cite from a secondary source, both the primary (who said it) and secondary (where the quote or idea was mentioned) sources should be included in the in-text citation information. Only the secondary source should be listed in the reference section, however. Use “as cited in” (without the quotation marks) to indicate the secondary source. For example, James Morgan hinted that “goat milk makes the best ice cream” (as cited in Alone 2008, p. 117). Morgan is the primary source (he said it) and Alone is the secondary source (he quoted what Morgan said). Only the secondary source is listed
  • 19.
    in the referencesection (Alone, and not Morgan) because if readers want to confirm the quote, they know to go to page 117 of Alone’s book. Personal Communication and Classical Work Personal Communications The APA manual rationalizes the exclusion of references for information obtained through personal communication (such as an interview, email, telephone call, postcard, text message, or letter) in the reference list because your readers will not be able to go directly to those sources and verify the legitimacy of the material. Instead, these items are cited only in the body of the paper. You must include the individual’s first initial, his or her last name, the phrase “personal communication,” and the full date of such communication. As with other citations, such citations may be either narrative or parenthetical. For example, L. Applebaum advised him to dip pretzel rolls in cheese fondue (personal communication, July 13, 2015). The alternative is that he was advised to dip pretzel rolls in cheese fondue (L.
  • 20.
    Applebaum, personal communication, July13, 2015). Note that there is no entry for Applebaum in the reference list. SAMPLE PAPER 11 Classical Works Classical works, such as the Bible and ancient Greek or Roman works, are also cited in the body of the paper but not included in the reference list. If you use a direct quote, you must include the full name of the version or translation you quoted from the first time you quote from it, but then you do not name the version or translation again in subsequent quotes unless you change versions or translations (6.18, pp. 178-179). For example, Philippians 2:14 commands us to “Do everything without complaining and arguing” (New Living Translation). James 1:27 proclaims that “Pure and genuine religion in the sight of God the Father means caring for orphans and widows in their distress and refusing to let the world corrupt you.” Galatians 5:22 says that “the fruit of the Spirit is love, joy, peace, patience,
  • 21.
    kindness, goodness, faithfulness” (NewAmerican Standard). Note that there is no translation cited for the middle quote, since it was also taken from the NLT, which was specified in the immediately-preceding citation as well. Technically, it would not be necessary or proper to include any version when you paraphrase the Bible because all versions essentially say the same message in each verse, so a paraphrase of one would apply equally to all versions. However, the APA (2010) manual is not explicitly clear that this rule only applies to direct quotes, and for the sake of consistency and curbing confusion, the OWC has opted to advise students to include the version the first time, even for paraphrases. Lectures and PowerPoints Course or seminar handouts, lecture notes, and PowerPoint presentations are generally treated like personal communications unless they are published in material that can be readily retrieved by your audience, like on a public website. When citing a PowerPoint presentation, include the slide number rather than the page number. For purposes of LU course presentations
  • 22.
    and lectures, however(which are not readily available to the public), the OWC advises students SAMPLE PAPER 12 that there are two options. The first and more proper way is to cite it as a video lecture with the URL for the presentation, naming the presenter(s) in the author’s position. Many of LU's classes are set up through Apple's ITunes University---search for your course and find the specific video at http://www.liberty.edu/academics/cafe/bb/index.cfm?PID=2556 3. Brewers and Peters (2010) is an example. The second option, if you cannot find it on iTunes U, names the course number and enough details for others to identify it within that course, in a sort of book format, with the city, state, and publisher relating to LU. Peters (2012) is an example of this. You'll note that in this particular case, the iTunes U included information on a second author that was not readily identifiable in the Blackboard video itself. Usually, you will
  • 23.
Usually, you will find the year of publication in the closing screen at the end of the presentation.
Dictionary Entries
The proper format for citing and referencing word definitions from dictionaries differs from other citations and references because the word defined is used in the author's position, followed by the year (if known, or n.d. if not known). This is followed by “In” and the name of the dictionary (e.g., Merriam-Webster), and includes a URL to the webpage if searched online. If you used a hard copy book, include the standard city, state, and publisher details. The in-text citation in the body of the paper would also use the word searched in the author's place, as well as the year: (Heuristic, n.d.).
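To make that pattern concrete, here is a minimal Python sketch (an illustration only, not part of APA style; the function name and inputs are hypothetical) that assembles a dictionary-entry reference from its parts:

    def dictionary_reference(word, year, dictionary, url=""):
        """Assemble a dictionary-entry reference: the word defined sits in the
        author's position, then the year (or n.d.), then 'In' plus the
        dictionary name, then the online source if one was used."""
        entry = f"{word.capitalize()}. ({year}). In {dictionary}."
        if url:  # searched online: point to the exact page for the word
            entry += f" Retrieved from {url}"
        return entry

    # The matching in-text citation would be (Heuristic, n.d.)
    print(dictionary_reference("heuristic", "n.d.",
                               "Merriam-Webster's online dictionary",
                               "http://www.m-w.com/dictionary/heuristic"))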
Exhaustive Samples Available
For a chart of a myriad of different sources and how each is formatted in proper APA format, look for the “Downloadable version of the OWL Purdue information on APA citations” on Liberty University's Online Writing Center's “APA Formatting” webpage: http://www.liberty.edu/academics/cafe/bb/index.cfm?PID=25563
Electronic Sources
The APA, author of the APA manual, published a blog entry on how to cite documents found on the Internet (see http://blog.apastyle.org/apastyle/2010/11/how-to-cite-something-you-found-on-a-website-in-apa-style.html). It includes a .pdf chart with all the possible combinations, depending on what information you have or are missing. Use this for all online resources other than LU-course lectures. APA requires inclusion of a Digital Object Identifier (DOI) in the references whenever available. These should be denoted in lower case (doi). Note that there should be no punctuation after the doi in your reference list, and no space between the initials and the number itself.
If you cite “Retrieved from” with a URL, note that APA (2010) does not include the date of retrieval “…unless the source material may change over time (e.g., Wikis)” (p. 192). Some of the hyperlinks in this paper are activated (showing blue, underlined text) for the purposes of visualization, but hyperlinks should be removed in scholarly papers, and they should appear only in the reference list. To do this, right-click the hyperlink in Microsoft Word and choose “remove hyperlink.” Like DOIs, there should be no period after the URL. APA encourages breaking long URLs with soft returns (hold down the Shift key and press the Enter key) at forward slashes, periods, or underscores to avoid unsightly gaps. You may have to remove multiple elements of the hyperlink that linger in those circumstances.
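As a quick illustration of that breaking rule (APA prescribes no code, of course; this small Python helper and its width threshold are invented for demonstration), a long URL can be split into display lines that always break before a slash, period, or underscore:

    import re

    def break_url(url, width=60):
        """Split a long URL into display lines, breaking before a slash,
        period, or underscore once a line reaches the target width."""
        lines, current = [], ""
        # Zero-width split so each breakable character starts a new token.
        for token in re.split(r"(?=[/._])", url):
            if current and len(current) + len(token) > width:
                lines.append(current)
                current = token
            else:
                current += token
        if current:
            lines.append(current)
        return lines

    for line in break_url("http://www.liberty.edu/academics/graduate/writing/some/long/path"):
        print(line)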
Final Formatting Tweaks
APA papers should be double-spaced throughout, with no extra spacing between lines, and should use Times New Roman, 12-point font throughout. Sometimes when you format your paper or cut-and-paste material into it, things get skewed. One quick way to ensure that your paper appears correct in these regards is to do a final formatting tweak after you have completed your paper. Hold down the “Ctrl” button and press the “A” key, which selects and highlights all of the text in your paper. Then go to the Home tab in Microsoft Word and make sure that Times New Roman and 12-point font are selected in the Font box. Next, click on the arrow at the bottom of the Paragraph tab. Set your spacing before and after paragraphs to “0 pt” and click the “double” line spacing. If you are more advanced on the computer, you might consider changing the default settings in Word that create some of these formatting errors, but the steps listed here will correct them if you do not have advanced word processing skills.
Conclusion
The conclusion to your paper should provide your readers with a concise summary of the main points of your paper (though not via cut-and-pasted sentences used above). It is a very important element, as it frames your whole ideology and gives your reader his or her last impression of your thoughts. After your conclusion, insert a page break at the end of the paper so that the reference list begins at the top of a new page. Do this by holding down the “Ctrl” key and then pressing “Enter.” You will go to an entirely new page in order to start the reference list. The word “Reference” or “References” (not in quotation marks; for a single resource or multiple resources, respectively) should be centered, with no bolding or italics. Items in the reference list are presented alphabetically by the first author's last name and are formatted with hanging indents (the second and subsequent lines are indented 1/2" from the left margin). If you include a DOI or URL, be sure to remove the hyperlink as addressed above. One example of each of the primary types of resources will be included in the reference list, as cited in the body of the paper, for illustrative purposes.
Remember that, for purposes of this paper only, the sources cited in the body of the paper were provided for illustrative purposes only and thus are fictional, so you will not be able to locate them if you search online. Nevertheless, in keeping with APA style, all resources cited in the body of the paper are included in the reference list and vice versa (except for personal communications and classical works, per APA's published exceptions). Be absolutely sure that every resource cited in the body of your paper is also included in your reference list (and vice versa), excepting only those resources with special rules, such as the Bible, classical works, and personal communications. The reference list in this paper will include a book by person(s), a book whose publisher is the same as the corporate author, a chapter in an edited book, a journal article, a webpage document, a resource with no author, a dictionary entry, one with no year of publication noted, two or more resources by the same author in the same year of publication (arranged alphabetically by the first word in the title, but with the addition of letters in the year to distinguish which one you are referring to in the body of your paper), two or more resources by the same author in different years (arranged by date, with the earlier one first), resources with the same first author but differing others, a paper previously submitted by a student in a prior class, a resource with up to seven authors, and one with more than seven authors.
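Those arrangement rules are mechanical enough to express in a few lines of code. Here is a minimal Python sketch, using made-up entries modeled on the samples below, of how the same-author ordering and the a/b year suffixes work (an illustration only, not an APA requirement):

    from itertools import groupby

    # Hypothetical reference entries: (author, year, title).
    refs = [
        ("Double, C.", 2008, "This is the second"),
        ("Double, C.", 2008, "This is arranged alphabetically by the name of the title"),
        ("Second, M. P.", 2015, "Remember that earlier date goes first"),
        ("Second, M. P.", 2011, "Same author arranged by date"),
    ]

    # APA order: author, then year (earlier first), then title.
    refs.sort(key=lambda r: (r[0], r[1], r[2].lower()))

    # Same author and same year: add lowercase letters (2008a, 2008b, ...).
    for (author, year), group in groupby(refs, key=lambda r: (r[0], r[1])):
        group = list(group)
        for i, (_, _, title) in enumerate(group):
            suffix = chr(ord("a") + i) if len(group) > 1 else ""
            print(f"{author} ({year}{suffix}). {title}.")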
Lastly, below are a few webpages that address critical topics, such as how to avoid plagiarism and how to write a research paper. Be sure to check out Liberty University's Online Writing Center (http://www.liberty.edu/index.cfm?PID=17176) for more tips and tools, as well as its Facebook page (https://www.facebook.com/LibertyUniversityOWC/). Remember that these are only provided for your easy access and reference throughout this sample paper; web links and URLs should never be included in the body of scholarly papers, just in the reference list.
These include videos on writing a research paper (https://www.youtube.com/watch?v=zaa-PTexW2E or https://www.youtube.com/watch?v=KNT6w8t3zDY) and on avoiding plagiarism (https://www.youtube.com/watch?v=VeCrUINa6nU).
References
Acworth, A., Broad, P., Callum, M., Drought, J., Edwards, K., Fallow, P., & Gould, P. (2011). The emphasis of the day. Melville, PA: Strouthworks. [1]
Allen, B., Bacon, P., & Paul, M. (2011). Pericles and the giant. The Journal of Namesakes, 12(8), 13-18. doi:001.118.13601572 [2]
Alone, A. (2008). This author wrote a book by himself. New York, NY: Herald. [3]
Alone, A., Other, B., & Other, C. (2011). He wrote a book with others, too: Arrange alphabetically with the sole author first, then the others. New York, NY: Herald. [4]
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author. [5]
Brewers, G., & Peters, C. (2010). Defining integration: Key concepts [Video lecture]. Retrieved from https://itunes.apple.com/us/podcast/introduction-to-integration/id427907777?i=92371729&mt=2 [6]
Brown, J. (2009). Ardent anteaters. Merill, NJ: Brockton Publishers.
Brown, M. (2009). Capricious as a verb. Journal of Grammatical Elements, 28(6), 11-12. [7]
Carlisle, M. A. (n.d.). Erin and the perfect pitch. Journal of Music, 21(3), 16-17. Retrieved from http://make-sure-it-goes-to-the-exact-webpage-of-the-source-otherwise-don’t-include [8]
Double, C. (2008a). This is arranged alphabetically by the name of the title. Banks, MN: Peters.
Double, C. (2008b). This is the second (“the” comes after “arranged”). Banks, MN: Peters. [9]
Harold, P., Maynard, M., Nixon, L., Owen, C., Powell, C., Quintin, J., … Raynard, A. (2014). Apricot jam: A sign of the times. Endicott, NY: Peace & Hope. [10]
Heuristic. (n.d.). In Merriam-Webster’s online dictionary (11th ed.). Retrieved from http://www.m-w.com/dictionary/heuristic [11]
Liberty University. (2015). The online writing center. Retrieved from https://www.liberty.edu/index.cfm?PID=17176 [12]
Owen, C. (2012). Behavioral issues resulting from attachment disorders have spiritual implications. Unpublished manuscript, COUN 502, Liberty University. [13]
Perigogn, A. U., & Brazel, P. L. (2012). Captain of the ship. In J. L. Auger (Ed.), Wake up in the dark (pp. 108-121). Boston, MA: Shawshank Publications. [14]
Peters, C. (2012). Counseling 506, Week One, Lecture Two: Defining integration: Key concepts. Lynchburg, VA: Liberty University Online. [15]
Prayer. (2015). Retrieved from http://www.exact-webpage [16]
Second, M. P. (2011). Same author arranged by date (earlier first). Journal Name, 8, 12-13.
Second, M. P. (2015). Remember that earlier date goes first. Journal Name, 11(1), 18. [17]
Notes on the reference list entries above:
[1] Resource with seven authors (the maximum allowed by APA before a special rule applies).
[2] Typical journal article with a doi.
[3] Entry by an author who also appears as one of many authors in another resource (the single-author entry appears first in the list).
[4] Multiple authors appear after the same single-author resource.
[5] Resource with the corporate author as publisher.
[6] LU video lecture using iTunes U details.
[7] Resources by two authors with the same last name but different first names in the same year of publication; arrange alphabetically by the first initials.
[8] Resource with no publishing date, and a URL.
[9] Two resources by the same author in the same year; arrange alphabetically by the title and then add lowercase letters (a and b, respectively, here) to the year.
[10] Resource with eight or more authors; note the ellipsis (…) in place of the ampersand (&).
[11] Dictionary entry.
[12] Online webpage with a URL.
[13] A student's paper submitted in a prior class, cited in order to avoid self-plagiarism.
[14] Chapter from an edited book.
[15] LU class lecture using course details rather than iTunes U.
[16] Online resource with no named author; the title of the webpage is in the author's place.
[17] Two resources by the same author, in different years; arrange by the earlier year first.

JOURNAL OF RESEARCH ON TECHNOLOGY IN EDUCATION, 39(4), 331-357
Examining the Development of a Hybrid Degree Program: Using Student and Instructor Data to Inform Decision-Making
Audrey Amrein-Beardsley, Teresa S. Foulger, and Meredith Toth
Arizona State University
Abstract
This paper investigates the questions and considerations that should be discussed by administrators, faculty, and support staff when designing, developing, and offering a hybrid (part online, part face-to-face) degree program. Using two Web questionnaires, data were gathered from nine instructors and approximately 450 students to evaluate student and instructor perceptions and opinions of hybrid instruction and activities. In comparison to prior research, the results of this study offer larger and more significant policy and programmatic implications for degrees based on the hybrid format, including instructional technology training and support for students and instructors, creation of common class procedures and expectations, and development of consistent schedules that maximize benefit and flexibility for students and instructors. (Keywords: hybrid, online, degree program, communities of practice, teacher education, organizational change.)
INTRODUCTION
While online learning has become the focus of much research and debate regarding its efficacy in meeting or exceeding student learning outcomes (Neuhauser, 2002; Russell, 1999; Skylar, Higgins, Boone, Jones, Pierce, & Gelfer, 2005; Summers, Waigandt, & Whittaker, 2005), hybrid courses have been largely treated as a subset of distance education and are seldom examined as a unique method of course delivery.
Due to the development of readily available technologies, the potential of hybrid instruction as a model that combines these new technological applications with more traditional approaches to education has been recognized (Anastasiades & Retalis, 2001). While literature exists evaluating online courses (Benbunan-Fich & Hiltz, 2003; DeTure, 2004; Overbaugh & Lin, 2006), online degree programs (Benson, 2003; Snell & Penn, 2005; Wilke & Vinton, 2006), and hybrid courses (Donnelly, 2006; Leh, 2002; Riffell & Sibley, 2005), little has been published specific to the design opportunities made available by hybrid degree programs. Recent studies by the National Center for Education Statistics (Waits & Lewis, 2003) and The Sloan Consortium (Allen & Seaman, 2006) show a growing appeal and acceptance of online learning. However, little is understood about effective program design when multiple courses are linked in a formal degree program. Drawn by the appeal of a model that combines the flexibility of online learning with the benefits of in-class meetings and activities, a teacher education college in a university in the southwest United States chose to investigate the hybrid model as a new delivery method for its teacher preparation undergraduate degree program.
Utilizing a survey research, mixed-methods approach, this study was largely exploratory in nature and sought to answer the following research question: What policy and programmatic issues should be discussed by administrators, faculty, and support staff when designing, developing, and offering a hybrid degree program? Through an analysis of student and instructor perceptions of hybrid course design and instruction, coupled with administrative directives, the researchers sought to understand the concerns of each group. This study documents the knowledge brokered between students, instructors, and administrators, and provides information to stakeholders that will inform degree program decisions and promote common practices across classes.
LITERATURE REVIEW
Compared to other areas of education research, the field of online learning is still relatively new, and consistent definitions or methods of categorization have yet to be established. Classifications of online learning vary in a number of ways, such as the technologies employed (Garrison, 1985), teaching and learning methods (Misko, 1994), pedagogical approaches (Dziuban, Hartman, & Moskal, 2004), and where the design lies on the continuum from fully face-to-face to fully online (Allen & Seaman, 2005; Twigg, 2003).
Some scholars do not draw such clear distinctions and instead describe as “hybrid” any course that combines traditional face-to-face instruction with online technologies (Swenson & Evans, 2003). For the purposes of this study, the researchers use the hybrid terminology already in use by our university administration. This definition aligns with that of the Sloan Consortium (Allen & Seaman, 2006) as a delivery method that blends face-to-face and online instruction. More particularly, it aligns with Twigg's hybrid model, which offers a more specific definition referring to the “replacement” of traditional class time with out-of-class activities such as Web-based resources, interactive tutorials and exercises, computerized quizzes, technology-based materials, and technology-based instruction (Twigg, 1999). To facilitate the transition from traditional face-to-face to hybrid courses, Aycock, Garnham, and Kaleta (2002) recommend instructors start small by redesigning an activity or unit of a course, then augment the process in subsequent semesters.
When multiple hybrid courses are fully implemented, the hybrid degree program will accommodate the needs of today's students by offering a program that is accessible and flexible (Bonk, Olson, Wisher, & Orvis, 2002; Graham, Allen, & Ure, 2003; Sikora, 2002). This is particularly relevant when students taking multiple courses in a given semester attempt to schedule classes and internships in ways that support demands on their time. Over the last several decades, most research on courses that blend face-to-face and technology-mediated instruction has focused on the way technologies such as audio recordings (LaRose, Gregg, & Eastin, 1998), television (Machtmes & Asher, 2000), computer conferencing (Cheng, Lehman, & Armstrong, 1991), or course management systems (Summers, Waigandt, & Whittaker, 2005) can be used to provide instruction as effective as that of a traditional face-to-face classroom. Literature specific to hybrid courses has followed this trend and also reveals an emphasis on student achievement (Boyle, Bradley, Chalk, Jones, & Pickard, 2003; McCray, 2000; Olapiriyakul & Scher, 2006; O'Toole & Absalom, 2003) or the affective factors most valued by students or instructors in hybrid courses (Ausburn, 2004; Bailey & Morais, 2004; Parkinson, Greene, Kim, & Marioni, 2003; Woods, Baker, & Hopper, 2004).
More recently, attention has shifted from the technology itself to an emphasis on the pedagogical approaches that should lead the way (Bennett & Green, 2001; Buckley, 2002; Reeves, Herrington, & Oliver, 2004; Twigg, 2001). Adding online technologies complicates instruction. Quality online instruction must incorporate learning theory and practices from traditional face-to-face courses as well as effective pedagogical use of technology (Yang & Cornelious, 2004). Since instructors rely on a number of factors to accomplish their programmatic goals, those that contribute to successful instructional design and delivery are difficult to pinpoint in degree programs, whether online, hybrid, or face-to-face (Moore, 1993). Yet, if institutions interested in exploring hybrid delivery focus only on the design and delivery of individual course offerings, problems such as disjointedness, a lack of “program” focus, and overall poor quality can arise from neglecting to examine the program as a whole (Husmann & Miller, 2001).
Limited knowledge is available regarding the programmatic implications of hybrid design (Phipps & Merisotis, 1999), the focus of this study. As allies in the learning process, faculty and administrators must take time to identify the factors influencing student satisfaction, adapt course design and structure to meet diverse student needs, and actively engage in the learning process with students (Young, 2006). The present study seeks to fill this gap in the literature by understanding administrative directives and gathering input from student and instructor communities to identify the larger and more significant policy and programmatic implications related to designing and developing hybrid degree programs.
THEORETICAL FRAMEWORK
Participation in Communities of Practice
Within any organization, groups of people associated with a common practice naturally come together to share successes and failures and brainstorm new ideas. This is a naturally occurring phenomenon of a healthy system (Wenger, 1998). Rogers (2002) observed that although opportunities for individualized learning are increasing, there are significant advantages to group learning.
Although struggles are more likely to arise within groups and group work requires certain levels of maturity among participants (Goleman, 1995; Mezirow, 2000), there are definite advantages for groups in the learning process: (a) groups can provide a supportive environment, (b) groups create challenges unavailable in isolated learning situations, (c) groups build more complex cognitive structures due to the representation of a variety of experiences, and (d) groups are dynamic and can become a community of practice as they draw in members (Rogers, 2002). The Communities of Practice learning theory (CoP) encompasses these elements of collaboration within groups and organizational systems. In a single CoP, members represent unique experiences and knowledge, but unite for the purpose of improving their common practice. These collaborative experiences form naturally based on the needs of the participants (Sumsion & Patterson, 2004). Once formed, the participants develop ways of maintaining connections within and beyond their community boundaries (Sherer, Shea, & Kristensen, 2003). Constituencies outside the CoP might include those at various levels within the organization, some outside of the organization, and newcomers attempting to enter the CoP.
When individuals are involved in multiple CoPs, transfer of knowledge from one CoP to the other can occur. It is difficult, however, for newcomers in unfamiliar communities to understand the community workings as fully as long-standing members (Brown & Duguid, 2000; Lave & Wenger, 1991; Wenger, 1998).
Boundary Brokers and Trajectories
In some cases, CoP members can take on the role of boundary brokers to expedite organizational change (Sherer, Shea, & Kristensen, 2003). When members of a community exist on the periphery and broker information with another CoP, a boundary trajectory occurs (Wenger, McDermott, & Snyder, 2002). In such cases, the links between the CoPs cause boundaries to expand and create a practical mechanism for greater understanding between communities (Iverson & McPhee, 2002). In this way, boundary brokers seamlessly expand access to resources within relevant communities (Sherer, Shea, & Kristensen, 2003), especially in organizations that nurture membership in multiple communities (Kuhn, 2002).
However, it is a very delicate challenge to sustain an identity in this type of social setting, as those who translate, coordinate, and align perspectives through ties to multiple communities must be able to legitimately influence the “development of a practice, mobilize attention, and address conflicting interests” (Kuhn, 2002, p. 109). Although organizations can support infrastructural investment for CoPs, CoPs function best when members engage in authentic interactions and negotiations based on the needs of the members. These needs bring them together in a meaningful way surrounding their individual identities, roles, intentions, realities, and agendas (Thompson, 2005). This balance between administrative or professional development forces and the organic needs of members who choose to engage in the inquiry process reaffirms the need for a professional development environment that embraces CoP functions and empowers CoP members (Cousin & Deepwell, 2005; Foulger, 2005; Thompson, 2005).
Situating This Study
As part of a college initiative to explore new modes of delivering degree programs, the college dean approached the Elementary Education department chair (the largest department in the college) and one technology instructor with the charge of “creating capacity” to offer online courses. To develop and evaluate the courses, the technology instructor solicited guidance from information technology administrators, instructional design support personnel, college administrators, department chairs, instructors, and students.
[Figure 1: Findings from this study were drawn from the convergence of student, instructor, and administrator perspectives. The diagram shows the student community, instructor CoP, and administration CoP linked by boundary brokering.]
After consulting with these stakeholders, the college offered a two-day intensive seminar on designing and developing hybrid courses.
Sixteen instructors, including the Elementary Education department chair, volunteered to participate in the hands-on seminar and redesign a two-week component of one of their face-to-face courses as a hybrid unit offered half online and half face-to-face. All of the instructors were proficient with online technology tools and received additional training in hybrid course design and instruction, but they had never taught online before. The instructors collaborated to redesign their units using asynchronous technologies that employed Blackboard tools and methods (Blackboard, version 6.2, the university-sponsored course management system). Because communities of practice are not necessarily fixed systems, and because each interaction among members has a multitude of influences (Wenger, 1998), a prescriptive vision for the hybrid program could not be determined at the conception of this hybrid investigation. This lack of rigidity was embraced by instructors participating in the study. From the CoP perspective, the hybrid instructors in this study negotiated a balance between the identities associated with three specific social forces (see Figure 1). The following issues were expressed prior to the beginning of this study and were used to inform the development of the hybrid design:
• Administration Community of Practice: Administrators were most concerned with decreasing use of classroom space, providing training and support to hybrid instructors, and creating incentives for participation. Instructors served as peripheral participants and advisors to the Administrative CoP at the onset of the study by communicating the need to develop policies and procedures supportive of the transformation of a face-to-face to hybrid degree program.
• Hybrid Instructor Community of Practice: Teacher education instructors who elected to redesign a previously taught course into a hybrid course were initially concerned with maintaining high standards and student accountability, assuring that technology would be used to enhance instruction, and understanding which activities were best suited for face-to-face or online environments.
• Hybrid Student Community: Instructors initially knew very little about the student perspective. However, they realized the importance of brokering knowledge from the student community as a way to understand the student perspective and use the information to influence instructor and administrative decisions.
As the college devised initial plans for the development of the hybrid program and began implementation, purposefully exchanging information between these three critical stakeholder groups led to a greater understanding of the realities of each group. These initial conversations brought about a broader understanding of the contributing practices of administrators and instructors believed to be critical for student success in the hybrid degree program. Through the methods employed in this study, the researchers probed the instructor and student CoPs more deeply to determine the most effective practices and how this knowledge could inform the administrative CoP to advance the hybrid program.
METHODS
Data reported in this study were collected from instructors and students as they experienced the college's first attempt at transforming traditionally face-to-face instruction to a hybrid format.
Instructor Sample
After completing the seminar on hybrid course design and instruction, nine of the 16 instructor participants (56%) committed to teaching their hybrid unit the following semester.
At the conclusion of their units, all nine instructor participants completed the online Instructor Hybrid Evaluation Questionnaire (see Appendix), designed to capture instructors' perceptions of their students' and their own experiences with the hybrid unit. One instructor completed the questionnaire twice for two different courses (response rate = 100%).
Student Sample
Following the directions of the primary researchers in this study, instructor participants distributed the online Student Hybrid Evaluation Questionnaire (see Appendix) to the students who participated in their hybrid unit of instruction. To assure a high response rate, each instructor solicited participation directly from his or her students by explaining that their feedback would help improve the overall program, particularly for future students. Each of the nine instructors distributed the questionnaire directly to their students. Some students participated in more than one course where hybrid units were offered; these students were encouraged to take the questionnaire multiple times based on their unique experiences in each course.
In cases where the relative response rate was of concern, students were sent one reminder to participate. A total of 413 out of approximately 450 students completed the online questionnaire (response rate = 92%). The high response rate is probably due to the fact that students completed the anonymous online questionnaire during normal class time or were held accountable for their participation, predominantly through class credit.
Instrument
Rather than examining success factors for students in these courses, two complementary Web questionnaires were designed to gather information regarding student and instructor perspectives on the hybrid instruction and activities, the hybrid degree program, and course planning and design (Benson, 2002). Similar questionnaire forms allowed for comparative analyses between instructor and student participants and more holistic analyses across groups. Part I of both the instructor and student questionnaires collected general demographic, technology access, and course and programmatic information.
Part II presented instructors and students with a list of technology tools provided within Blackboard. If tools were used, instructors and students were asked to respond to Likert-type items indicating the extent to which the tools enhanced (a) the instructor participants' perceived abilities to provide quality instruction and (b) the student participants' perceived abilities to learn. Part III, Section 1 asked instructors and students to indicate their levels of agreement with statements about affective factors of hybrid instruction. This section was adapted from materials provided online as part of the Hybrid Course Project at the University of Wisconsin-Milwaukee (Learning Technology Center, 2002). To encourage students and instructors to read and reflect on each statement and decrease the likelihood that they would select the same value for continuous items, positive and negative statements were placed in a randomized sequence. Part III, Section 2 asked instructors and students to indicate their overall levels of agreement regarding face-to-face and online environments. Part IV asked students and instructors to provide insights they thought would be useful to instructors and the college regarding online activities, hybrid course development, and hybrid degree program development.
Instrument Internal-Consistency Reliability
Estimates of reliability were calculated for each section of the student and instructor Web questionnaires. Coefficient-alpha estimates of internal-consistency reliability were computed for Parts II and III (Cronbach, 1951). Coefficient-alpha estimates for the positive and negative statements built into Part III, Section 1 were adjusted so that responses could be interpreted on the same scale, and inversely related estimates would not cancel each other out. All sections of the Web questionnaires yielded acceptable alpha levels (see Table 1 for coefficient-alpha levels of both instruments) and warranted their use for the purposes of this research study. Values below .70 are often considered unacceptable (Nunnally, 1978).

Table 1: Coefficient Alpha Estimates of Reliability

                                  Part II:           Part III, Section 1:     Part III, Section 2:
                                  Blackboard Tools   Affective and Personal   Overall Agreeability
                                                     Factors
    Student Web Questionnaire     0.724              0.718                    0.853
    Instructor Web Questionnaire  0.791              0.828                    0.744
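For readers who want to reproduce this kind of reliability check, the sketch below shows how coefficient alpha is commonly computed from a respondents-by-items score matrix, including the reverse-coding step for negatively worded Likert items that the authors describe. This is a generic Python illustration with made-up data, not the authors' actual analysis code:

    import numpy as np

    def reverse_code(scores, negative_items, scale_min=1, scale_max=5):
        """Flip negatively worded Likert items so all items point the same way."""
        adjusted = scores.astype(float).copy()
        for j in negative_items:
            adjusted[:, j] = (scale_max + scale_min) - adjusted[:, j]
        return adjusted

    def cronbach_alpha(scores):
        """Alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: 6 respondents x 4 items; item 3 is negatively worded.
    responses = np.array([
        [4, 5, 4, 2],
        [3, 4, 3, 3],
        [5, 5, 4, 1],
        [2, 3, 2, 4],
        [4, 4, 5, 2],
        [3, 3, 3, 3],
    ])
    print(round(cronbach_alpha(reverse_code(responses, negative_items=[3])), 3))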
Methods of Data Analysis
Frequency statistics were used to analyze each demographic, course, and programmatic question in Part I of both Web questionnaires. For Parts II and III, descriptive statistics were calculated using participant responses to the Likert items, and means were rank ordered to illustrate levels of participant agreement per item. T-tests using independent samples were also used to test for significant differences between the opinions of the instructor- and student-participant groups. Participant responses to the open-ended, free-response items in Part IV were read, coded, and reread, and emergent themes were categorized into bins (Miles & Huberman, 1994). Once bins became focused and mutually exclusive in nature, the items included within each bin were collapsed into categories, quantified, and labeled. Overall themes were validated by instructor participants during a focus group conducted by the researcher participants, and the themes were left intact, without any additions or deletions. These themes will be discussed further in the Implications section of this study.
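An independent-samples t-test of this kind is a one-liner in most statistics packages. The following Python sketch, with invented ratings standing in for the real data, shows the comparison the authors describe between instructor and student opinions on a single Likert item:

    import numpy as np
    from scipy import stats

    # Hypothetical 1-5 Likert ratings of one Blackboard tool.
    instructor_ratings = np.array([4, 5, 3, 4, 4, 5, 3, 4, 5])      # nine instructors
    student_ratings = np.array([3, 2, 4, 3, 3, 2, 4, 3, 3, 2, 3])   # a subset of students

    # Two-sided independent-samples t-test, judged at the .05 level.
    t_stat, p_value = stats.ttest_ind(instructor_ratings, student_ratings)
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} ({verdict} at p < .05)")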
RESULTS
Part I: Demographic Information and Technology Access
In Part I of the Web questionnaire, investigators gathered demographic, technology access, and course and programmatic information from student and instructor participants. More than 60% of student participants primarily used a personal desktop computer to complete coursework. About 20% of student participants used portable laptops, and 10% completed online lessons and assignments on campus at the student computer center or the library.
Approximately 90% of student participants accessed the Internet through a high-speed connection, while about 10% relied on dial-up networks. Students reported that an average of 3.7 of their courses (out of a maximum of five courses students may take each semester) involved some hybrid component during the semester of study. Instructor participants indicated that they replaced an average of six face-to-face classes (out of approximately thirty total instructional days) with online instruction. The total number of face-to-face days replaced with online instruction ranged from a low of two to a high of 10 days.
Part II: Student and Instructor Perceptions of Blackboard Learning Tools
In Part II of the Web questionnaire, student and instructor participants identified the Blackboard tools they found most and least useful in terms of enhancing student learning in the hybrid format. The closer each item mean is to 5, the more the student or instructor participants agreed with each statement. For the purposes of this study, the results from this section are used to provide larger programmatic considerations and recommendations (see Figure 2).
[Figure 2: Blackboard tools ranked by students and instructors from most to least useful. The tools rated were the online gradebook, course document downloads, Internet sites/links, e-mail between instructor and student, e-mail between student and student, small group discussion board, full class discussion board, online assignment submission, online quizzes/tests, and digital drop box.]
Of the Blackboard tools identified in the Web questionnaire, students found the online grade book and announcements most useful.
Students appreciated instructors who graded assignments and posted them in the grade book in a timely and efficient manner and criticized instructors who did not use the grade book effectively or did not post grades soon after reviewing student work. Students appreciated that they could monitor their progress in courses using the grade book and thought that more college instructors should use the tool. Although students appreciated the use of announcements, almost 50% of student participants expressed a need for instructors to be consistent with announcement frequency and to provide clear and simple written information. Students also requested that instructors e-mail students after posting an announcement, particularly if announcements are not used as part of the normal class routine. Students found the course document downloads, Internet sites and links, and e-mails sent to them from the instructor equally useful in terms of technology tools that enhanced their learning. Some students expressed concern regarding their ability to find or download course documents, and others had difficulty visiting and spending time on Internet sites if they had only dial-up access.
Students appreciated when instructors e-mailed them to clarify components of the coursework and most appreciated instructors who responded to student e-mails in a friendly, “timely” manner. Students were very critical of instructors who did not respond to student e-mails in a “timely” manner, responded in an unfriendly manner, or did not respond at all. Students questioned whether instructors who do not respond to e-mails in such a manner should be implementing online activities in their courses. Because students do not meet as often in a hybrid setting, the primary communication method between students and instructors is e-mail. When instructors did not respond in a timely manner, students expressed high levels of frustration and outright anger. In general, students felt that discussion boards were more useful than in-class discussions because students could take their time to compose a response, students were required to participate online while they were not required to participate in face-to-face discussions, and students who normally do not participate in class were not as reluctant to express an opinion online. Students also found small-group discussion boards to be particularly useful when quizzes and tests required them to use the knowledge gained from such discussions.
Despite these benefits, students felt that discussion board assignments sometimes became redundant, were not always useful, and sometimes detracted from more important course activities or assignments. Instructors disagreed with their students in two ways. First, instructors found the Internet sites and links and the full class discussion board to be significantly more useful (p < .05) than their students found these technology features. Second, instructors found student-to-student e-mail, online assignment submissions, course document downloads, small group discussion boards, and online quizzes and tests significantly less useful (p < .05) than their students found these technology tools.
Part III: Student and Instructor Responses to Affective Items
In Part III of the Web questionnaire, student and instructor participants indicated their level of agreement with thirteen affective statements about hybrid instruction. The closer each item mean is to 5, the more the student or instructor participants agreed with each statement (see Figures 3, 4, and 5). Of the first 10 statements (Section 1), five were written in a favorable vernacular and five were written in an unfavorable vernacular.
[Figure 3: Instructor and student responses to favorable, affective questions. The items concerned balancing coursework with home and work responsibilities, learning more about the subject matter, the technology enhancing understanding of the coursework, controlling the pace of learning, and developing communication skills.]
For this reason, results have been split into two sections and ranked from high to low levels of agreement. Students agreed that the online components of their classes helped them balance their coursework with other home and/or work responsibilities and learn more about subject matter. Students most disagreed that they had to spend too much time trying to get access to a computer to do the coursework effectively, and that they were at a disadvantage because they did not understand how to use the technology tools as well as the other students. If the response rate had been lower, use of a Web questionnaire might suggest that students with technology issues were underrepresented in the sample of students who participated; however, this was not the case. Students were most ambivalent (mean = 2.5) towards whether online learning was better than learning in a face-to-face environment. Instructors viewed the impact of online instruction on their students' learning significantly more favorably than did their students. Instructors were significantly more concerned than students with whether some students were disadvantaged by a lack of technology skills.
Instructors were significantly less concerned than students with whether the time spent online would have been better spent in the classroom and whether online experiences made students feel less connected with their instructors (p < .05).
[Figure 4: Instructor and student responses to unfavorable, affective questions. The items concerned whether time spent online would have been better spent in the classroom, whether the technology tools made students feel less connected with the instructor or with other students, whether a lack of skill with the technology tools put students at a disadvantage, and whether too much time was spent trying to get access to a computer.]
Part III, Section 2 included three overarching, open-ended questions designed to capture student and instructor participants' overall opinions and suggestions regarding hybrid instruction.
Each item mean is illustrated in Figure 5; the closer each mean is to 5, the more the student or instructor participants agreed with each statement. Overall, students and instructors agreed that it would be a good idea if the entire teacher education program involved face-to-face and online activities and if other courses incorporated more online activities. They also believed that the content of the courses was well suited for a combination of face-to-face and online activities. Instructors agreed at higher levels, but students and instructors ranked the three statements in the same order by similar levels of agreement.
[Figure 5: Instructor and student responses to overarching, open-ended questions. The items were: overall, I think the content of this course is well-suited for a combination of face-to-face and online activities; overall, I think it would be a good idea if other courses would incorporate more online activities; and overall, I think it would be a good idea if the entire program involved face-to-face and online activities.]
Part IV: Student Responses to Open-Ended Questions
In Part IV of the Web questionnaire, student and instructor participants were asked to provide information or insights they thought would be useful to instructors and the college regarding online activities and hybrid course development.
In response to the request for information or insights they thought would be useful to their instructors regarding hybrid activities, student participants responded with enthusiasm for increasing hybrid courses across the college, with the stipulation that the hybrid components be beneficial to students and that assignments be of reasonable length and pertinent to the students' professional development. Students requested that instructors plan online/in-class schedules in collaboration with other instructors to maximize flexibility and minimize confusion.
In addition, students felt that the online/in-class schedule should be organized and disclosed to students at the outset of a course so they would have the opportunity to opt out of a course with online components when scheduling their semesters. Students also expressed frustrations with some technologies (such as trial software) they felt compromised their opportunities to succeed in an online learning environment. Instructor participants suggested that all instructors hold students accountable for the online work associated with any given course while maintaining a certain degree of flexibility, especially given students' busy schedules and the challenges they might face in learning new technologies. Instructors also noted that hybrid activities should not create additional work for students, but should replace less valuable work normally conducted in a face-to-face setting. Finally, instructors recommended that all instructors be clear, organized, responsive, and timely when responding to e-mail and other student communications, such as discussion boards.
The Web questionnaire also prompted students for information or insights they thought would be useful to the college regarding hybrid or online activities. A strong majority of students responded favorably towards hybrid instruction, but stated the college should proceed with caution. Approximately 10% of student participants did not encourage the college to offer more hybrid courses or activities. This group of students felt that face-to-face interaction, rather than some online and some face-to-face interaction, was more conducive to their learning. These students also expressed frustrations that they were not made aware of the online components before opting in to the course(s). In general, student respondents thought that college instructors should not implement online activities without first obtaining the skills to teach in an online environment, committing to respond to students in a timely manner, and organizing their materials in a way that is conducive to online instruction.
All instructor participants commended the college on its exploration of a hybrid degree program and recommended that as the college progresses, evaluative efforts continue in order to ensure that hybrid instruction is implemented in a way that best benefits student learning. Instructors also requested that more training opportunities be made available to help them use existing tools, integrate online activities, and effectively collaborate with each other.
IMPLICATIONS
During the process of reading, coding, and identifying emergent themes representing the three community perspectives, several categories of programmatic issues were noted as factors contributing to the success of the hybrid program. When these issues and implications were reviewed with instructor participants during a focus group, the instructor participants validated the implications, and the identified themes were left intact. These implications are programmatic in nature and mostly address the administration, yet they impact the different identities within the hybrid degree program community. Addressing these recommendations will affect the success of instructor course design and student learning.
Develop Program Policy Supportive of Teaching and Learning in Hybrid Courses
When registering for courses, students were not informed that some course materials, activities, and assignments would be delivered online. Some students adjusted well to the hybrid delivery method, but others expressed frustration with the unexpected technology requirements and non-traditional instructional methods. With the help of administrators, the researchers made use of a course catalog footnote and an existing Web site that alerts students that they are signing up for a hybrid course and explains how these courses differ from more traditional face-to-face classes. It is our recommendation that when developing and promoting a hybrid degree program, expectations, instructional and communication methods, technical requirements, and benefits of combining the face-to-face and online learning environments be fully communicated to students prior to registration. Students can then make an informed decision as to whether the hybrid format meets their particular learning styles and preferences, schedule, and other needs. This communication could take place by providing information about the hybrid degree program in college marketing material, during advising and registration sessions, and in program or course orientations. In such a manner, instructors and students will have common understandings regarding course design and expectations, and students not wanting to participate may opt out of such courses.
Support the Creation of Common Procedures and Expectations across Courses
When the hybrid units were developed for this study, instructors for each of the courses did not collaborate to develop common class or instructional procedures. In some cases, inconsistencies from course to course caused student confusion and frustration. It is important to remember the student perspective when developing a hybrid program. Some common elements across courses could positively impact student understanding and feasibility. Instructor CoPs should be encouraged to discuss their class procedures and expectations in order to develop common procedures. This is not to say that all instructors should have identical procedures, but that collaboration for the purpose of creating some level of consistency will benefit students.
Common procedures and expectations could be developed related to e-mail and discussion board use, netiquette, use of course announcements, how to handle a technology snow day (Hitch, 2002), technology assistance, methods for instructor contact, frequency and deadlines for discussion board posts, mechanisms for work submission, and the like.
Allocate Face-to-Face and Online Time across Courses
Most of the students participating in this study enrolled in more than one course that used a hybrid format. Because the hybrid units did not fall in the same time period during the semester, student schedules were not consistent from week to week, causing frequent confusion and aggravation. Using student feedback, instructors worked with administrators to standardize Wednesday and Thursday as face-to-face days, leaving Monday, Tuesday, and Friday free for student teaching, internships, and other student activities. This simple solution provided more structure for students and less confusion across courses within the same semester.
Although face-to-face and online activities should best fit the needs of a particular subject area and course (Veronikas & Shaughnessy, 2004), this study suggests that faculty and administrative CoPs work together to coordinate a schedule that outlines specific face-to-face and online days that will accommodate students taking multiple hybrid classes in the program. Maximum flexibility for students will occur when all courses in a given semester follow a similar or complementary pattern of online and face-to-face days.
Support Instructor CoPs as They Refine and Adopt Technology Tools
All instructor participants in this study received a basic overview of online technologies during a summer workshop on designing and developing hybrid courses. Still, instructors found it difficult to gain an in-depth working knowledge of the online tools and features commonly associated with online instruction. The design of activities was inhibited by their limited knowledge of and familiarity with the available tools. Collaborative conversations within instructor CoPs about the functions and features of online tools appeared to increase the sophistication of technology use and instructional design.
Students participating in the study clearly articulated their preferences toward certain instructional practices and activities. It was evident that students preferred more simplistic methods of delivery (instructor presentations available for effortless download), online interactions (straightforward discussion boards), and ease in work submission. Instructor CoPs should discuss the use of technology tools to support specific learning needs, but technology that does not enhance instruction should be reduced or eliminated. As instructors within a CoP learn about technology tools and their instructional uses, they will develop activities that incorporate the best of both face-to-face and online delivery methods. A supportive environment conducive to exploration, collaboration, and cooperation will result in instructionally sound activities and shared practices that will contribute to the overall quality of the program. To support this professional development and growth among hybrid instructors, administrators should provide mechanisms for faculty to collaborate within their CoP and interact with others outside their CoP, including instructional designers and technology support staff.
Provide Instructional Design Training and Support for Instructors
The online questionnaire used in this study prompted instructors to reflect on their hybrid units and identify successes as well as areas for improvement.
The resulting data prompted the need for further professional development opportunities related to technology tools and delivery options. Becoming a good hybrid instructor is a developmental process and requires continual nurturing and support in terms of the additional time it takes to develop and teach a hybrid course, as well as the adjustment to delivering materials, interacting with students, and designing activities for a Web-based environment (Kincannon, 2000). When asking instructors to redesign a course as a hybrid, administrators should recognize that this design and development process is akin to developing a new course, and instructors will likely need technology training. As such, administrators need to support the professional development of instructors. This can take place in many ways, including providing adequate time over the course of several semesters to collaborate with other hybrid instructors, instructional designers, experienced colleagues, technology trainers, and other personnel; soliciting help from other instructors or institutions with more experience; providing hands-on training opportunities or one-on-one tutoring; and providing opportunities for instructors to share their successes with each other.
Provide Support for Students to Gain New Skills
Anecdotal evidence gathered during this study indicated that many students sought help from one another, upgraded from dial-up to faster Internet connections at home, accessed the wireless networks on campus via laptops, purchased home computers or laptops, and improved their general technology skills. It is likely that the need for efficiency in completing online activities and assignments drove these changes. Although it is possible that hybrid degree programs will attract more technologically savvy and independent students, it should not be assumed that students who enroll in hybrid courses have critical technology skills (Kvavik, 2005). Those who do not will be disadvantaged by the program delivery method. In order for students to focus on course content, it is critical that technology not be an obstacle to student access to course materials and support resources. As such, hybrid degree programs should identify and require base-level technology skills or offer training opportunities that prepare students with technology skills before classes begin (Gastfriend, Gowen, & Layne, 2001).
    level technology skillsor offer training opportunities that prepare students with technology skills before classes begin (Gastfriend, Gowen, & Layne, 2001). These minimum technology skills should be communicated in college materials, advising sessions, and program or course orientations. In addition, instructors should not assume that students have experience with the technologies used or that they have the ability to adopt new skills quickly. Even if students enter the program with a minimum set of technology skills, additional training or modeling during face-to-face classes, and written procedures and tutorials made available to all students will decrease concerns with technology and increase student ability to focus on content. Continually Evaluate the Program Instructors in this study noted that as knowledge was created and brokered during seminars and brown bag discussions, through formative feedback from students, and via the summative online questionnaire, evaluation practices helped them better understand and assess the implications of hybrid course and program design. In addition to traditional course evaluations, ongoing program evaluation must be implemented to continually improve instruction and student learning in any hybrid degree program (Levin,
Levin, Buell, & Waddoups, 2002). Also, program evaluation and assessment must be based on multiple methods and must meet specific standards to ensure representation of the program's impact on administrators, faculty, and students (Quality on the line, 2000). Normally a new program would undergo rigorous scrutiny, with intense ongoing evaluation procedures that lessen over time as issues are worked out and satisfaction levels stabilize. However, with technology playing an integral role in hybrid courses, as new tools are made available or new uses for tools become established, ongoing innovation and refinement of courses, program delivery, and program structure becomes more necessary than in traditional face-to-face design. If this is the case, then the call for ongoing program evaluation policy would be meaningful to administrators, instructors, and students. Granted, systematically embedding data-driven decision making within a hybrid program would require more resources of time and money than one might normally commit. Not planning at the onset for continual innovation and evaluation would be a mistake for any hybrid program that does not wish to compromise quality.
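As a concrete illustration of the kind of ongoing, data-driven monitoring recommended here, the short Python sketch below summarizes responses from a recurring course evaluation questionnaire and flags items whose ratings decline between semesters. It is only a sketch: the file name, item names, and Likert scale are hypothetical, not drawn from the study's instrument.

```python
# Hypothetical sketch: summarizing Likert-scale responses from a hybrid
# course evaluation questionnaire. "hybrid_evaluations.csv" and the
# column names are invented for illustration.
import pandas as pd

responses = pd.read_csv("hybrid_evaluations.csv")

# Mean rating and response count per item, split by semester.
items = ["tech_access", "course_satisfaction", "online_activity_value"]
summary = responses.groupby("semester")[items].agg(["mean", "count"])
print(summary)

# Flag items whose mean rating fell semester-over-semester, as one
# possible trigger for the kind of course refinement the authors urge.
means = responses.groupby("semester")[items].mean()
declines = means.diff().lt(0)
print(declines)
```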
CONCLUSIONS

Although the scope of this study was limited to nine instructors and their respective students, the results provide interesting and relevant findings for those interested in hybrid program design. The data collected indicate areas of success as well as areas for improvement, but overall the hybrid design was well received. The implications drawn represent a comprehensive dataset and demonstrate practices that must be thoughtfully considered by program developers before offering a hybrid degree program. While the primary factor in any instructional initiative remains the quality of the instructional design (Johnson & Aragon, 2002), the implications identified in this article are intended to directly affect the success of students enrolled in a hybrid degree program. It is hoped that this study will spur further research in this area, as over time student profiles will include more technology-savvy populations needing to balance education with personal and professional obligations. For institutions of higher education wanting to offer innovative programs that
accommodate student needs, hybrid degree programs may provide the answer. Any such program should be strategically designed, collaboratively developed, and implemented within a community vested in offering a successful program.

Contributors

Audrey Amrein-Beardsley is an assistant professor in the College of Teacher Education and Leadership at Arizona State University. Dr. Amrein-Beardsley holds a PhD in Educational Policy and Research Methods and specializes in tests, assessment, and survey research. (Address: Audrey Amrein-Beardsley, PO Box 37100, MC 3151, Phoenix, AZ 85069-7100; Phone: 602.543.6374; Fax: 602.543.7052; E-mail: [email protected])

Teresa S. Foulger is an assistant professor in the College of Teacher Education and Leadership at Arizona State University. Dr. Foulger holds an EdD in Educational Technology and specializes in technology-rich environments where collaboration, communities of practice, and innovative professional development models spur organizational change. (Address: Teresa S. Foulger, PO Box 37100, MC 3151, Phoenix, AZ 85069-7100; Phone: 602.543.6420; Fax: 602.543.7052; E-mail: [email protected])

Meredith Toth is an instructional designer with the Applied
Learning Technologies Institute at Arizona State University. She holds an M.A. in Learning, Design, and Technology from Stanford University and specializes in technology integration in higher education. (Address: Meredith Toth, PO Box 37100, MC 1051, Phoenix, AZ 85069-7100; Phone: 602.543.3192; E-mail: [email protected])

References

Allen, I., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. The Sloan Consortium. Retrieved November 25, 2006, from http://www.sloan-c.org/publications/survey/pdf/growing_by_degrees.pdf

Allen, I., & Seaman, J. (2006). Making the grade: Online education in the United States, 2006. The Sloan Consortium. Retrieved March 3, 2007, from http://www.sloan-c.org/publications/survey/pdf/Making_the_Grade.pdf

Anastasiades, P. S., & Retalis, S. (2001, June). The educational process in the emerging information society: Conditions for the reversal of the linear model of education and the development of an open type hybrid
learning environment. Paper presented at the ED-MEDIA 2001 World Conference on Educational Multimedia, Hypermedia & Telecommunications, Tampere, Finland. ERIC Document Number: ED466129. Retrieved April 28, 2006.

Ausburn, L. (2004). Course design elements most valued by adult learners in blended online education environments: An American perspective. Educational Media International, 41(4), 327-337.

Aycock, A., Garnham, C., & Kaleta, R. (2002). Lessons learned from the hybrid course project. Teaching with Technology Today, 8(6). Accessed online April 12, 2005, at http://www.uwsa.edu/ttt/articles/garnham2.htm

Bailey, K., & Morais, D. (2004). Exploring the use of blended learning in tourism education. Journal of Teaching in Travel & Tourism, 4(4), 23-36.

Benbunan-Fich, R., & Hiltz, S. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298-312.

Bennett, G., & Green, F. P. (2001). Student learning in the online environment: No significant difference? Quest, 53, 1-13.

Benson, A. (2002). Using online learning to meet workforce
demand: A case study of stakeholder influence. Quarterly Review of Distance Education, 3(4), 443-452.

Benson, A. (2003). Dimensions of quality in online degree programs. American Journal of Distance Education, 17(3), 145-159.

Blackboard (1997). Blackboard (Version 6.2) [Computer software]. Washington, DC: Blackboard Inc.

Bonk, C., Olson, T., Wisher, R., & Orvis, K. (2002). Learning from focus groups: An examination of blended learning. Journal of Distance Education, 17(3), 97-118.

Boyle, T., Bradley, C., Chalk, P., Jones, R., & Pickard, P. (2003). Using blended learning to improve student success rates in learning to program. Journal of Educational Media, 28(2/3), 165-178.

Brown, J. S., & Duguid, P. (2000). The social life of information. Boston: Harvard Business School Press.

Buckley, D. (2002). Pursuit of the learning paradigm: Coupling faculty transformation and institutional change. Educause Review, 37(1), 28-38.

Cheng, H., Lehman, J., & Armstrong, P. (1991). Comparison of performance and attitude in traditional and computer conference
classes. The American Journal of Distance Education, 5(3), 51-64.

Cousin, G., & Deepwell, F. (2005). Designs for network learning: A communities of practice perspective. Studies in Higher Education, 30(1), 57-66.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

DeTure, M. (2004). Cognitive style and self-efficacy: Predicting student success in online distance education. American Journal of Distance Education, 18(1), 21-38.

Donnelly, R. (2006). Blended problem-based learning for teacher education: Lessons learnt. Learning, Media, & Technology, 31(2), 93-116.

Dziuban, C., Hartman, J., & Moskal, P. (2004). Blended learning. EDUCAUSE Center for Applied Research Research Bulletin. Accessed online January 21, 2007, at http://www.educause.edu/ir/library/pdf/ERB0407.pdf

Foulger, T. S. (2005, Summer). Innovating professional development standards:
A shift to utilize communities of practice. Essays in Education, 14. Retrieved September 20, 2006, from http://www.usca.edu/essays/vol14summer2005.html

Garrison, D. R. (1985). Three generations of technological innovation in distance education. Distance Education, 6(2), 235-241.

Gastfriend, H. H., Gowen, S. A., & Layne, B. H. (2001, November). Transforming a lecture-based course to an Internet-based course: A case study. Paper presented at the National Convention of the Association for Educational Communications and Technology, Atlanta, Georgia. ERIC Document Number: ED470085. Retrieved April 28, 2006.

Goleman, D. (1995). Emotional intelligence: Why it can matter more than IQ. New York: Bantam Books.

Graham, C. R., Allen, S., & Ure, D. (2005). Benefits and challenges of blended learning environments. In M. Khosrow-Pour (Ed.), Encyclopedia of information science and technology (pp. 253-259). Hershey, PA: Idea Group.

Hitch, L. P. (2002). Being prepared for technology snow days. ECAR Research Bulletin, 24. Retrieved March, 2003, from http://www.educause.edu/LibraryDetailPage/666?ID=ERB0224
Husmann, D. E., & Miller, M. T. (2001). Improving distance education: Perceptions of program administrators. Online Journal of Distance Learning Administration, IV(III). Retrieved September 23, 2006, from http://www.westga.edu/~distance/ojdla/articles/fall2001/husmann43.pdf

Iverson, J. O., & McPhee, R. D. (2002). Knowledge management in communities of practice: Being true to the communicative character of knowledge. Management Communication Quarterly, 16(2), 259-266.

Johnson, S. D., & Aragon, S. R. (2002, Spring). An instructional strategy framework for online learning environments. Paper presented at the Academy of Human Resource Development (AHRD) Conference, Honolulu, Hawaii. Retrieved April 28, 2006, from ERIC database.

Kincannon, J. M. (2002, April). From the classroom to the Web: A study of faculty change. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, Louisiana. ERIC Document Number: ED467096. Retrieved April 28, 2006.

Kuhn, T. (2002). Negotiating boundaries between scholars and practitioners: Knowledge, networks, and communities of practice. Management Communication Quarterly, 16, 106-112.
Kvavik, R. (2005). Convenience, communications, and control: How students use technology. In D. G. Oblinger & J. L. Oblinger (Eds.), Educating the Net Generation (pp. 7.1-7.20). Washington, DC: Educause. Retrieved January 8, 2005, from http://www.educause.edu/educatingthenetgen/

LaRose, R., Gregg, J., & Eastin, M. (1998). Audiographic telecourses for the Web: An experiment. Journal of Computer-Mediated Communication, 4(2). Retrieved March 2, 2007, from http://jcmc.indiana.edu/vol4/issue2/larose.html

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.

Learning Technology Center. (2002, March 12). Learning technology center: Our project. Retrieved May 2005 from http://www.uwm.edu/Dept/LTC/our-project.html

Leh, A. (2002). Action research on hybrid courses and their online communities. Educational Media International, 39(1), 31-38.

Levin, S. R., Levin, J. A., Buell, J. G., & Waddoups, G. L.
(2002). Curriculum, Technology, and Educational Reform (CTER) online: Evaluation of an online master of education program. TechTrends, 46(5), 30-38.

Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-46.

McCray, G. (2000). The hybrid course: Merging on-line instruction and the traditional classroom. Information Technology & Management, 1(4), 307-327.

Mezirow, J. (2000). Learning to think like an adult: Core concepts of transformation theory. In J. Mezirow & Associates (Eds.), Learning as transformation: Critical perspectives on a theory in progress (pp. 3-33). San Francisco: Jossey-Bass.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage Publications.

Misko, J. (1994). Flexible delivery: Will a client-focused system mean better learning? Adelaide: National Centre for Vocational Education Research.

Moore, M. (1993). Is teaching like flying? A total systems view
of distance education. American Journal of Distance Education, 7(1), 1-10.

Neuhauser, C. (2002). Learning style and effectiveness of online and face-to-face instruction. American Journal of Distance Education, 16(2), 99-113.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw Hill.

Olapiriyakul, K., & Scher, J. (2006). A guide to establishing hybrid learning courses: Employing information technology to create a new learning experience, and a case study. Internet & Higher Education, 9(4), 287-301.

O'Toole, J. M., & Absalom, D. (2003). The impact of blended learning on student outcomes: Is there room on the horse for two? Journal of Educational Media, 28(2/3), 179-190.

Overbaugh, R., & Lin, S. (2006). Student characteristics, sense of community, and cognitive achievement in web-based and lab-based learning environments. Journal of Research on Technology in Education, 39(2), 205-223.

Parkinson, D., Greene, W., Kim, Y., & Marioni, J. (2003).
Emerging themes of student satisfaction in a traditional course and a blended distance course. TechTrends, 47(4), 22-28.

Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy. Accessed online on February 16, 2007, at http://www.ihep.org/Pubs/PDF/Difference.pdf

Quality on the line: Benchmarks for success in internet-based distance education. (2000). Washington, DC: The Institute for Higher Education Policy. Retrieved September 16, 2006, from http://www.ihep.com/Pubs/PDF/Quality.pdf#search=%22Quality%20on%20the%20line%20benchmarks%22

Reeves, T., Herrington, J., & Oliver, R. (2004). A development research agenda for online collaborative learning. Educational Technology Research and Development, 52(4), 53-65.

Riffell, S., & Sibley, D. (2005). Using web-based instruction to improve large undergraduate biology courses: An evaluation of a hybrid course format. Computers & Education, 44(3), 217-235.

Rogers, A. (2002). Teaching adults (3rd ed.). Philadelphia: Open University
Press.

Russell, T. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of Instructional Telecommunications, University of North Carolina.

Sherer, P. D., Shea, T. P., & Kristensen, E. (2003). Online communities of practice: A catalyst for faculty development. Innovative Higher Education, 27(3), 183-194.

Sikora, A. (2002). A profile of participation in distance education: 1999-2000. Retrieved January 8, 2005, from National Center for Education Statistics Web site: http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003154

Skylar, A., Higgins, K., Boone, R., Jones, P., Pierce, T., & Gelfer, J. (2005). Distance education: An exploration of alternative methods and types of instructional media in teacher education. Journal of Special Education Technology, 20(3), 25-33.

Snell, C., & Penn, E. (2005). Developing an online justice studies degree program: A case study. Journal of Criminal Justice Education, 16, 18-36.

Summers, J., Waigandt, A., & Whittaker, T. (2005). A comparison of student achievement and satisfaction in an online versus a traditional face-to-face
statistics class. Innovative Higher Education, 29(3), 233-250.

Sumsion, J., & Patterson, C. (2004). The emergence of community in a preservice teacher education program. Teaching and Teacher Education, 20(6), 621-635.

Swenson, P., & Evans, M. (2003). Hybrid courses as learning communities. In S. Reisman (Ed.), Electronic learning communities: Issues and practices (pp. 27-72). Greenwich, CT: Information Age Publishing.

Thompson, M. (2005). Structural and epistemic parameters in communities of practice. Organization Science, 16(2), 151-164.

Twigg, C. A. (1999). Improving learning & reducing costs: Redesigning large-enrollment courses. Center for Academic Transformation. Retrieved April 20, 2005, from http://www.thencat.org/Monographs/mono1.pdf

Twigg, C. A. (2001). Innovations in online learning: Moving beyond no significant difference. Troy, NY: Pew Learning and Technology.

Twigg, C. A. (2003). Improving learning and reducing costs: New models for online learning. Educause Review, 38(5), 28-38.
Veronikas, S. W., & Shaughnessy, M. F. (2004). Teaching and learning in a hybrid world: An interview with Carol Twigg. Educause Review, 39(July/August), 51-62. Retrieved February, 2006, from http://www.educause.edu/apps/er/erm04/erm044.asp

Waits, T., & Lewis, L. (2003). Distance education at degree-granting postsecondary institutions: 2000-2001 (NCES 2003-017). U.S. Department of Education, National Center for Education Statistics. Retrieved January 8, 2005, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003017

Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge University Press.

Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating communities of practice: A guide to managing knowledge. Boston: Harvard Business School Press.

Wilke, D., & Vinton, L. (2006). Evaluation of the first web-based advanced standing MSW program. Journal of Social Work Education, 42(3), 607-620.

Woods, R., Baker, J., & Hopper, D. (2004). Hybrid structures: Faculty use and perception of web-based courseware as a supplement to face-to-face instruction. Internet & Higher Education, 7(4), 281-297.
Yang, Y., & Cornelious, L. F. (2004, October). Ensuring quality in online education instruction: What instructors should know? Paper presented at the Association for Educational Communications and Technology conference, Chicago, Illinois. ERIC Document Number: ED484990. Retrieved September 23, 2006.

Young, S. (2006). Student views of effective online teaching in higher education. The American Journal of Distance Education, 20(2), 65-77.
APPENDIX: STUDENT/INSTRUCTOR HYBRID EVALUATION QUESTIONNAIRE

PART I: DEMOGRAPHIC QUESTIONS

What is your age? (Students only)
Possible responses: Younger than 18; 18 to 25; 26 to 35; 36 to 45; 46 to 55; Older than 56

Where do you primarily access a computer for schoolwork? (Students only)
Possible responses: Home (desktop); Mobile (laptop); Student computer center; Library; Friend/Relative residence; Other

How do you most often connect to the Internet? (Students only)
Possible responses: Home (high-speed); Home (dial-up); Away-from-home (high-speed); Away-from-home (dial-up)

Degree (Students only)
Possible responses: Undergraduate; Graduate; Post-Baccalaureate

[An additional response set, Elementary Education; Secondary Education; Special Education, appears here; its question label is illegible in this reproduction.]

Current Semester
Possible responses: Semester 1; Semester 2; Semester 3; Semester 4

Course / Course Title / LAST name of your instructor

For this semester, how many of your courses incorporated online days? (Students only)
Possible responses: One to Five

For this course, approximately how many face-to-face days were replaced with online activities this semester? (Instructors only)

[The remaining questionnaire items, a rotated matrix of experience ratings, are illegible in this reproduction and are omitted.]
Organizing for Evidence-Based Decision Making and Improvement

By Christina Leimer

Christina Leimer ([email protected]) is associate vice president for institutional effectiveness at California State University-Fresno. Drawing on her experience at two universities and a community college, she conducts research, writes, speaks, and consults about organizing for evidence-based decision making and improvement and how we define, judge, and achieve effectiveness.

In today's accountability climate, regional accrediting bodies are requiring colleges and universities to develop and sustain a culture of evidence-based decision making and improvement. But two-thirds of college presidents in a 2011 Inside Higher Ed survey said their institutions are not particularly strong at using data for making decisions. And despite accreditors' intense focus on learning outcomes as a core piece of evidence of institutional effectiveness, a 2009 National Institute for Learning Outcomes Assessment (NILOA) survey revealed that 60 percent of provosts believe that they need more faculty engagement and more technical expertise to strengthen the assessment of learning on their campuses. In my 17 years of working in institutional research, learning outcomes assessment,
strategic planning, and accreditation, I have watched the external demands for the collection and use of information escalate. The response is often an increased data flow that occurs a year or so before the accreditors arrive and ebbs when they leave campus. But I have also experienced faculty and administrators becoming enthusiastic as they engage in the design, collection, analysis, and discussion of data and the decision making that it informs. When sustained, this enthusiasm for information about institutional performance becomes part of campus culture. When this doesn't happen, it is generally not for a lack of expertise about how to conduct such research—colleges and universities are filled with people who know how to do just that. Nor is the problem solely about autonomy, although that certainly plays a major role in resistance to accountability demands, as do concerns about unfair comparisons of institutions and the difficulty of measuring complex skills and organizational impacts. Instead, two major elements are often missing that are necessary to spark and sustain evidence-based decision making and improvement. One is leadership in making sense of, strategically applying, and communicating data and findings to diverse audiences in ways that prompt organizational learning and stimulate people's desire to know more and then to act on the information. The other is a highly visible institutional research (IR) function that is integrated with complementary functions such as
assessment and planning and that is widely recognized as integral to shaping institutional policy and practice.

The Role of Leadership in Evidence-Based Decision Making

Evidence-based decision making—which accreditors expect colleges and universities to engage in continuously—combines professional experience with data, research, and literature to draw conclusions, make judgments, and determine courses of action. When an information-based mode of thinking and working is part of the culture, people reflexively ask questions and search for relevant data before deciding on a new program or developing initiatives. They routinely evaluate learning, processes, and progress toward goals to determine whether the programs and initiatives are achieving the desired outcomes. In such a culture, reflecting on practice and asking "how do we know?" is standard fare. Developing such a culture takes sustained effort over a long period of time at multiple levels of the organization. But someone needs to take the lead—to advocate for, and maintain focus on, this mode of thinking and practice. On most campuses, no position or office is assigned this role. An IR office and other operational units may provide data, but this in itself does not promote their use, nor is their application self-explanatory. For culture to change, someone must turn data into information and institutional knowledge through analysis and interpretation. Then someone needs to be responsible for setting that knowledge in the context of
institutional goals and disseminating it in multiple formats that are appropriate to particular stakeholders, in order to inform recommendations and planning. By participating in initial and ongoing discussions of programs and initiatives, personnel with research and evaluation backgrounds can help frame questions so that they can be answered empirically and relate to issues of concern. They can then help communicate the results to the campus. Over time, an accumulation of examples of the positive effects of data use will help keep evidence-based decision making a valued component of campus life. IR offices can play a significant role in such change, yet they are often underutilized. In a 1996 survey, 90 percent of college presidents said they wanted their IR offices to be proactive, but only half said that they were fulfilling this expectation. Some long-term IR professionals also recognize that the conventional IR role is too narrow for the issues facing higher education today, but their offices may have insufficient staff or expertise to take on higher-level challenges. It is often the case as well that campus leaders perceive IR as a merely technical or reporting office that is too low in the hierarchy to be involved in strategic discussions. Whatever the reason, at most campuses, IR has neither been assigned nor assumed a prominent role in culture change. This is why I sometimes hear from senior administrators and faculty accreditation leaders that "more" than IR is needed. But it is often unclear what that "more" is and how to achieve it.

New Models for Fostering Evidence-Based Decision Making
Literature on the mechanics of learning outcomes assessment and the technical aspects of conducting IR is voluminous, but not many models exist for how to organize for evidence-based decision making and improvement. During the last 15 years, institutional effectiveness (IE) offices and units have emerged as one response: The Directory of Higher Education listed 43 IE offices in 1995 and 375 in 2010. The number of IR offices increased during this period as well, from 672 to 1,499. So what are the differences between these two types of operation? To find out, I analyzed 30 IR and 30 IE office websites to examine their missions, structures, staffing, and responsibilities; to identify similarities and differences between the two; and to look for clues about why IE emerged. The responsibilities and purposes of these offices differ across campuses. In some cases, IE is simply a rebranding of IR. In others, the primary responsibility of IE offices is learning outcomes assessment. However, another configuration attempts to fill needs beyond those of conventional IR. In this arrangement, IE is an umbrella title for a unit or department that performs multiple "quality" functions: IR, planning, assessment, academic and administrative program review, and accreditation. To further investigate the purposes and operations of this configuration, I conducted semi-structured phone interviews with the lead managers of such offices at 19 US colleges and universities. The purpose was to determine how and why these offices began, how they are organized, why this configuration was chosen, and its benefits and challenges. The institutions studied include a range of public and private institutional types, sizes, Carnegie classifications, accreditation regions, and geographic locales. They are all not
for profit, with enrollments from less than 5,000 to 30,000, although the sample contains more smaller than larger institutions. In only six of the 19 cases were these offices designated as IE; one was entitled IR. Most often, they had hybrid names such as Institutional Research, Assessment, and Planning. While many offices are named this way, they usually do not have administrative oversight of all of these functions. Instead, they provide data and research that supports some of these functions, which are carried out by others. Because the names vary, I refer to the configuration that combines administrative responsibility for the quality functions for the purpose of evidence-based decision making and improvement as the integrated model (IM). Most of the offices in my study had assumed an integrated form within the last nine years, and they were still changing as they adapted to new needs.
The Integrated Model (IM)

The IM model is a solution to a need for culture change that exceeds the capabilities of conventional IR offices to support. While they still analyze data, IM offices take more of a leadership role than conventional IR ones do. IM personnel educate and advocate for the use of evidence in decision making. They may also bring their knowledge of external trends and issues affecting higher education and their institutions into presentations, analyses, and discussions in ways that can help challenge assumptions, deepen questioning and exploration, and prompt reflection that can lead to change. Personnel in these offices advise and consult with executives, middle managers, and faculty. They coordinate, facilitate, and develop processes, procedures, and structures that help make data use part of the culture, such as workshops, blogs, research review teams, or linkages between assessment and planning. They monitor and document progress toward strategic planning goals and play a key role in program review or accreditation. Evaluating initiatives and programs or partnering with operational managers to do so is common. IM office personnel may participate in establishing institutional goals through committee memberships, consulting with managers, and/or facilitating goal-setting processes such as retreats, forums, or other planning activities. They offer methodological training to managers and faculty to help them assess performance in their own areas. In assuming responsibility for encouraging the use of results, these offices act as catalysts for change. For instance, they may initiate opportunities to engage constituents in the institution's research agenda. Doing so creates familiarity with the process, demonstrates its value, garners support,
and improves the quality of research and evaluation by bringing diverse perspectives to complex questions. By linking the use of evidence to problems of interest to constituents, they may be able to spark curiosity and influence attitudes and perspectives that help develop an appreciation for data use. Integrating these functions coordinates a set of tools that helps executives, senior managers, and faculty identify where the organization is successful and where it is lagging, thereby helping to focus on internal improvement. At many colleges and universities both IR and assessment offices are chronically understaffed, as presidents responding to a 1996 survey acknowledged. Despite 15 years of increasing demands, most IR offices are still one- or two-person departments, and in a 2009 survey, NILOA found few resources devoted to learning outcomes assessment. In such cases, staffing may need to increase. However, integrating quality-improvement functions and drawing on their natural fit, respondents in the study said, creates greater efficiency, better products, synergies, and focus. So while the configuration does not allow for fewer staff, the office's productivity may well increase. Staff in integrated offices can more equitably distribute their work and make better use of individuals' strengths. Bringing together their multiple skills and perspectives allows for richer analyses and a larger view of institutional issues and provides opportunities for staff to learn from each other. This is helpful for developing the implications of and contextualizing data and other research findings. Uniting complementary skill sets creates another benefit. In general, IR professionals have stronger technical skills
than assessment professionals, and assessment professionals possess better interpersonal skills than their IR colleagues. Both skill sets are needed, but the combination may be difficult to find in one individual. When assessment and IR professionals work together, the products and services they can offer become a stronger force for change. In addition to this greater tangible value, a high-visibility department whose responsibilities reflect the organization's commitment to effectiveness can keep this method of operating in collective awareness. Personnel who find opportunities to consult, providing user-friendly information and engaging in ongoing discussions of institutional goals and problems, create an effectiveness orientation and normalize the use of evidence in making decisions. Changing culture is a complex undertaking that requires ongoing effort from many people in different parts of the organization using their various types of authority and influence. The IM office can be a crucial participant in this effort.

Building Capacity

Integrating Functions

Integrating the quality functions can fill both the leadership and infrastructure gaps that impede data-informed decision making and the development of a culture of evidence and improvement. But to do so, institutions first need to take stock of the existing functions, their current locations, and the extent to which they are collectively performing culture-development tasks. Not only will this illuminate gaps in responsibilities and institutional impediments to change—it may identify personnel with unused skills who can be cultivated or professionals who want to expand their skill sets.
The 19 IM offices studied included some similar elements. The majority combined IR, assessment, and accreditation. Nine had strategic-planning responsibilities as well, and five included academic and/or administrative program review. Some offices performed additional functions, such as institutional budgeting, business intelligence, grant management, market research, and the student evaluation of teaching. All of the offices in this study combined their chosen set of functions in a centralized unit, although the components included varied. One research university, for example, merged IR, learning outcomes assessment, program evaluation, decision support, and business intelligence into a single unit. An undergraduate teaching university combined IR, learning outcomes assessment, strategic planning, accreditation, testing, program review, and university relations and communications in creating the IM office. Although it was not part of my study, perhaps the oldest and best-known integrated unit is at Indiana University–Purdue University Indianapolis, where the division is called Planning and Institutional Improvement. Its functions include IR, information management, institutional planning, learning outcomes assessment, program review, economic modeling, and the testing center. Integration can be achieved in a more decentralized
manner as well. At my own institution, California State University–Fresno, IR and learning outcomes assessment are the functions of the IM office, but strategic planning is located with the president, accreditation with the associate provost, and academic program review with the undergraduate and graduate deans. My membership on the strategic planning committee, the accreditation core team, and academic program-review teams allows me to apply the tools of my office to university goals and quality-assurance processes and to recommend ways to improve them. In addition, these functions link up in various ways. For example, ongoing learning outcomes assessment is incorporated into periodic program review through the self-study. Like most of the offices in my study, we continue to develop mechanisms that strengthen these connections as this six-year-old configuration evolves.

Developing the Structural Configuration

The majority of the offices in the study were developed intentionally, prompted by accreditation requirements and/or the vision of the president. Four have evolved in this direction over time, and five are being developed on the fly as needs arise. In some cases, the offices are brand new; in others, existing offices are being expanded or multiple offices merged. Executives' authority and engagement in evidence-based decision making is critical to planning and developing such a configuration. The lead manager of an IR, assessment, or planning office may propose a plan, as happened at some institutions in the study, but only presidents and provosts have the authority to change infrastructure, allocate resources, and set institutional priorities and direction. Therefore, they must visibly take the lead in establishing such a configuration and
must support this new approach on an ongoing basis to ensure that it succeeds in influencing culture. Crucial to this support is a multi-year plan in which the unit is incorporated into relevant decision-making venues, responsibilities are shifted or added, staff are relocated or hired, and personnel and office titles are changed as needed. Depending on need, circumstances, and resources, the configuration can be created gradually or rapidly. An existing IR or assessment office usually serves as the nucleus to which the other functions are added. Independent offices in different divisions are sometimes merged into a single unit. Especially in large colleges and universities, individual staff members often perform IR or assessment work in an operational unit such as a registrar's or dean's office. Moving them into the new IM office may be an option that provides them with colleagues from whom they can learn and gain support while adding capacity or gaining efficiencies. Any type of organizational relocation or merging of offices and personnel requires ongoing attention to facilitate a smooth transition, help a group of individuals coalesce into a team, and ensure that managers who lose staff members continue to get their needs met. Two cautions should be kept in mind in developing and managing an integrated office. First, the focus and efficiency of such an office can be unintentionally diluted by adding responsibilities that detract from the goal of fostering an evidence-based decision-making culture. One manager described her developing office as attracting programs that were not working with the expectation that she would improve them. Another said that activities for which there was no other home or that no one else wanted were given to her department. This vulnerability is heightened in offices that evolve organically and in ones that have been renamed IE
without their responsibilities being clearly defined. The second caution is to ensure that the new configuration does not become a super-compliance office, orienting the majority of its activity to external accountability rather than to internal improvement. One manager suggested that accreditation should be excluded from this new arrangement because it carries so much weight that it could have this effect.

Reporting Lines

Almost all of the lead managers of these offices reported directly to the president or the chief academic officer—one to both. Only one of them reported below the vice-presidential level. Access to high-level decision makers is important to IM managers' ability to work across organizational boundaries and stay abreast of institutional issues on which
they can bring the tools of their offices to bear. More than half of the lead managers in my study held a title higher than director, ranging from senior director to vice president. Five were members of their presidents' cabinets.

Naming

As mentioned, there is no consistency in the titles of these offices. It appears that the name IE reflects accreditors' emphasis on demonstrating effectiveness, regardless of the specific set of tasks or functions that comprise the office. Among the offices in the study that were named IE or for which there were plans to do so, the title was chosen for two reasons. First, it reflected the purpose for which the tools of planning, research, and assessment were going to be used—institutional improvement and effectiveness, as institutionally defined—rather than focusing on the tools themselves. The other reason was because the responsibilities were intended to be broader than those of any of the functional areas. Tidewater Community College is an example of an institution that uses the name IE for the entire unit, without losing the titles that connote specific functions: Departments within the unit are called IR and Student Outcomes Assessment (SOA). The unit manager's title is director of IE, while staff positions are designated with a title followed by the qualifiers IR or SOA. While there may be institution-specific reasons for particular titles, the dissimilarities and incongruencies between titles and responsibilities across campuses are detrimental in multiple ways. Making institutional comparisons and locating models that are the most efficient and effective is nearly impossible when they cannot be identified by a common title.
Hiring also becomes more difficult because the department and staff titles candidates have can include skills and experiences that may be quite different from those the hiring manager might expect based on those titles. The scope of responsibilities of individuals with the same title can vary widely, as can their salaries. In the current climate, where financial considerations are paramount at most institutions, greater consistency in the titles of units, personnel, and responsibilities would help make organizing for evidence-based decision making more effective and perhaps less costly.

Staffing

As is true of freestanding IR and assessment offices, staffing is a challenge in integrated offices. The primary issue is too few staff and, to a lesser degree, insufficient expertise of the existing staff. Several of the offices in the study were understaffed, usually by at least one position. And even when funding is allocated for positions, experienced IR and assessment professionals are difficult to find. In implementing an integrated model, campus leaders should conduct an analysis to determine workload requirements and gaps in functioning. Personnel will need to be trained to fill those gaps, and additional staff may be needed. However, it is also possible that some tasks can be eliminated or shifted to other departments to optimize the use of existing personnel. For instance, a common complaint among IR professionals is that external reporting requires so much of their time that they cannot use their research methods and statistical skills for institutional improvement. Not only does this mismatch rob the institution of the full value these professionals can offer—it is a reason many new IR professionals leave
the field. In light of changes in computing technology that allow pre-packaged reports to be developed, it may be possible to shift some external reporting to other departments, such as the operating unit that generates the data or to the information-technology unit.
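To make the idea of a pre-packaged report concrete, here is a minimal sketch of the kind of script an operating unit could run on a schedule, freeing IR analysts from routine reporting. The file name, column names, and output format are hypothetical, not drawn from any particular institution's reporting requirements.

```python
# Hypothetical sketch of a "pre-packaged" external report: a standard
# headcount-by-program-and-level table produced each term. The input
# file "enrollment_snapshot.csv" and its columns are invented.
import pandas as pd

enrollment = pd.read_csv("enrollment_snapshot.csv")

# Count students per program and level, then write a flat report file.
report = (
    enrollment
    .groupby(["program", "level"])
    .size()
    .rename("headcount")
    .reset_index()
)

report.to_csv("external_headcount_report.csv", index=False)
print(f"Wrote {len(report)} rows to external_headcount_report.csv")
```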
Since the range of responsibilities in integrated offices is broader than those of a typical IR office, so are the skills, abilities, and personal traits that lead managers in IM offices need. To varying degrees, experience with and skills in research methods, statistical techniques, data analysis, statistical software, and database management are fundamental. But organizational, project-management, group-facilitation, and written and oral communication skills are important too, as are strong interpersonal skills that enable these managers to work effectively with a range of institutional constituents, from line staff and faculty to middle managers and executives. The abilities to build consensus, negotiate, communicate in non-technical language, coordinate people and projects, and lead are key. Personal characteristics needed include sensitivity, open-mindedness, flexibility, a capacity to listen, enthusiasm, a commitment to learning, a sense of humor, the ability to build others' self-confidence and motivate them, creativity, team-building and problem-solving capacities, a thick skin, a tolerance for ambiguity, and patience. So too are the abilities to educate, build trust, and use data to tell a compelling story. It is essential that IM professionals know what data are available and how they can be applied, as well as which methodologies can be used to answer questions. They need to understand the types of problems higher education managers must address, how colleges and universities operate, and how decisions are made there. They need to understand the political world of academia and how to work with others to reach institutional goals. They need to comprehend higher education culture and the culture of their particular institutions, as well as the external environment at the local, regional, national, and even international level as it impinges on institutional operations, problems, and goals. Developing a solid understanding of the intricacies of institutional data and their appropriate use at a particular institution takes years. Consequently, these offices will be better able to assist with institutional improvement and goal achievement if they retain early-career professionals. Most of the lead managers in the study were long-term IR or assessment professionals—primarily the former—with at least 10 years of experience. A little more than half had worked in the field at least 15 years. Half had developed
their skills and knowledge within their institution; the other half were recruited externally. Most seemed to revel in the challenges and opportunities of changing culture. These professionals' combination of technical, interpersonal, and organizing skills allowed them to shape their new offices and positions. When recruiting candidates to lead integrated offices, the Association for Institutional Research (AIR) and possibly the Society for College and University Planning would be reasonable places to contact, as would the AIR regional affiliates and regional accreditors. However, since IM offices are a recent phenomenon and the lead-manager role is a relatively new one, there is no formal training for developing and supporting leaders in dealing with the challenges of this emerging area. As this form of organizing grows, more professionals with this complex skill set will be needed. Because these professionals work closely with executives and across divisions and hierarchy, an executive-level leadership training program that addresses the challenges of a high level of ambiguity and a lack of direct operational authority in negotiating, mediating, facilitating, and changing culture would help mid-career IR and assessment professionals take on this role. In addition, it would offer the possibility of a career ladder to new professionals, many of whom leave IR within a few years, in part because opportunities for career advancement are typically scarce.

Leadership in integrated offices is more strategically than technically oriented. In an article that has become a classic in the IR profession, Terenzini (1999) describes three forms of intelligence that are necessary for high-performing IR staff: technical/analytical skills, a knowledge of the issues, and contextual intelligence. Leadership that can influence culture requires all three; Table 1 describes the actions through which each type can affect culture change.

Table 1. Terenzini's Typology and Its Effects on Culture Change

Technical/Analytical
• Select appropriate institutional data and assure the accuracy of their use
• Offer technical, research, and assessment expertise
• Demonstrate utility of data-driven decisions to campus constituencies

Issues
• Combine research capability and familiarity with the campus community and its issues for richer analyses
• Explicate the implications of research findings and make evidence-based recommendations
• Strengthen planning and program development
• Anticipate stakeholders' needs
• Collaboratively frame and refine questions and focus possibilities for change

Context
• Communicate institutional issues in a broader context
• Bring knowledge of higher education trends and issues to internal discussions to expand awareness
• Apply knowledge and research findings to challenge institutional assumptions, prompt reflection, and stimulate change
• Utilize institutional alliances and understanding of the institution's culture to spread the use of data in decisions

Establishing Role Boundaries

Creating anything new involves ambiguities and raises questions; hence, careful consideration must be given to defining the IM manager's boundaries and authority. The primary responsibility for goal-setting and evaluation within particular units or departments should remain with the operational manager and, in the case of learning outcomes and academic program review, with the faculty. IM managers should not take on sole responsibility for assessing everything or for overseeing quality in general. Instead, they should make recommendations and develop processes, structures, and policies as a member of an executive team.
Advocating for use of evidence in decision making and institutional improvement and educating about how to do so is the most central role of an IM office. Effective advocacy requires a nuanced understanding of the institution's culture and people in order to know when to make the best use of positional or expert authority, when to draw on relationships, and when to utilize high-profile or low-key approaches. Making research-based recommendations for change is an aspect of the role, but institutional decision making rests with those who are charged with the responsibility.

Conclusion

When external environments become more complex and demanding, internal administrative structures usually become more complex to deal with those demands. Better access to data and more of it will not, in itself, meet the public's expectations for higher education to be accountable and effective. It requires instead changes in both organizational structure and leadership. Integrating institutional research, learning outcomes assessment, strategic planning, program review, and accreditation can help colleges and universities achieve the culture of evidence and improvement needed to respond to external demands and move the institution into its chosen future.

Resources

Davenport, T. H., Harris, J. G., & Morison, R. (2010). Analytics at work: Smarter decisions, better results. Boston, MA: Harvard Business Press.

Green, K. C., Jaschik, S., & Lederman, D. (2011). Presidential perspectives: The 2011 Inside Higher Ed survey of college and university presidents. Inside Higher Ed. Retrieved from http://www.insidehighered.com/sites/default/archive/storage/files/SurveyBooklet.pdf

Harrington, C. F., Christie, R. L., & Chen, H. Y. (1996). Does institutional research really contribute to institutional effectiveness? Perceptions of institutional research effectiveness as held by college and university presidents. Paper presented at the 36th Annual AIR Forum, Albuquerque, NM, May 5-8.

Knight, W. E., & Leimer, C. (2009). Will IR staff stick? An exploration of institutional researchers' intentions to remain in or leave their jobs. Research in Higher Education, 51(2), 109.

Kuh, G., & Ikenberry, S. (2009, October). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://www.learningoutcomeassessment.org/MoreThanYouThink.htm

Leimer, C. (2009). Taking a broader view: Using institutional research's natural qualities for transformation. New Directions for Institutional Research, 143, 85-93.

Leimer, C. (2010, July). Wave of the future? Integrating institutional research, outcomes assessment, planning, program review and accreditation. Education Resources Information Center (ERIC), ED521064.

Leimer, C. (2011). The rise of institutional effectiveness: IR competitor, customer, collaborator, or replacement? AIR Professional File, 120.

Morest, V. S. (2009). Accountability, accreditation and continuous improvement: Building a culture of evidence. New Directions for Institutional Research, 143, 17-27.

Terenzini, P. T. (1999). On the nature of institutional research and the knowledge and skills it requires. New Directions for Institutional Research, 104, 21-29.

Volkwein, J. F. (2011, March 24). IR roles, responsibilities and reporting lines [Webinar]. Tallahassee, FL: Association for Institutional Research, Professional Development Services Committee.
This chapter provides tools, resources, and examples for engaging qualitative inquiry as a part of institutional research and assessment. It supports the development of individual ability and organizational intelligence in qualitative inquiry.

A Qualitative Toolkit for Institutional Research

Chrystal A. George Mwangi, Genia M. Bettencourt

As an institutional researcher, Sam has just finished analyzing the results of their institution's most recent campus climate study. The quantitative findings show clearly that Students of Color have negative experiences both within academic courses and co-curricular involvement. Students of Color responded in high numbers to questions about microaggressions on campus, indicating that these pervasive acts of racism permeate their daily experiences. Students of Color were also more likely to report feeling isolation on campus and dissatisfaction with the institution. Sam wants to know more about microaggressions on campus to be able to understand their different manifestations, the impact they have on Students of Color, and potential
strategies for intervention. To meet these goals, Sam decides to conduct qualitative research centered on the voices of these students experiencing microaggressions.
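For readers who want to picture the quantitative step that precedes Sam's decision, the sketch below shows one common way such subgroup comparisons are run. It is purely illustrative: the data file, column names, and rating scale are hypothetical, not part of the chapter's example.

```python
# Hypothetical sketch of a climate-survey subgroup comparison like the
# one Sam describes. "campus_climate.csv" and its columns are invented.
import pandas as pd

survey = pd.read_csv("campus_climate.csv")

items = ["microaggressions_freq", "isolation", "satisfaction"]

# Mean item scores by group; large gaps here are the kind of pattern
# that prompts a follow-up qualitative study.
print(survey.groupby("student_of_color")[items].mean())

# Share of each group reporting frequent microaggressions (e.g., 4 or
# higher on a 5-point frequency scale).
frequent = survey["microaggressions_freq"] >= 4
print(frequent.groupby(survey["student_of_color"]).mean())
```

A sketch like this can show that a gap exists, but not why it exists or how students experience it, which is exactly the limitation that motivates Sam's turn to qualitative inquiry.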
  • 123.
Paradigms

Paradigms, also known as worldviews, are "systems of beliefs and practices that influence how researchers select both the questions they study and methods that they use to study them" (Morgan, 2007, p. 50). All types of research are rooted in researchers' paradigms. Paradigms emerge out of researchers' epistemology, ontology, and axiology, shaping how knowledge is sought out and interpreted. These approaches shape the choices a researcher makes in what and how to pursue their topic. Although there are multiple classifications of paradigms, for simplicity, we utilize four overarching categories (Creswell, 2014; Mertens, 2015):
The positivist paradigm focuses on explaining, testing, and predicting phenomena (Guido, Chávez, & Lincoln, 2010). Information is objective and value-free, and exists within one true reality. This paradigm has evolved into postpositivism by incorporating a more critical lens to examine how a cause determines an effect or outcome (Creswell, 2014). In the former, a researcher might conduct a study to prove a hypothesis is correct and to discover the truth. In the latter, researchers aim to reject a null hypothesis in order to move closer to the truth.

The constructivist, or interpretive, paradigm views knowledge as socially constructed and individuals' experiences as framed by their unique context. Individuals have a subjective reality based on understanding their views (Creswell, 2014). Instead of a universal Truth, there are only truths that exist for individuals that are reliant on their context and time (Guido et al., 2010).

The critical, or transformative, paradigm can incorporate numerous theories that examine the experiences of marginalized individuals and unequal distributions of power. This approach tends to emphasize collaborative research processes to avoid perpetuating power imbalances (Creswell, 2014). These approaches look to restructure the status quo, with the goal of social change. Critical designs may utilize nonhierarchical methodologies that aim to involve participants as co-researchers in investigating a problem and implementing change, such as participatory action research. More widely, critical researchers also cite this paradigm as a way of interpreting results.
The pragmatic paradigm emphasizes that researchers choose the methods, processes, and tools that best answer the research question at hand (Creswell, 2014). Pragmatic paradigms are most commonly associated with mixed-methods research.

Sam is interested in engaging in depth with student voices and experiences, to understand how their experiences on campus are informed by their interactions with others, their daily lives, and their social identities. As such, Sam identifies that their research is rooted in a constructivist paradigm that prioritizes the context of diverse groups of students to learn more about their experiences and perspectives.
Crafting Questions

Qualitative data can provide a great deal of information, some of which may be beyond the scope and nature of what the researcher wants to investigate. Like research paradigms, crafting a research question(s) helps to constrain the scope of a study. Research questions provide guidance for one's inquiry and require a response that emerges from data and analysis. When a study becomes overwhelming, it is important to remember that a primary goal is to answer the research question(s). Good research questions stem from the purpose of the study. Consider whether the research purpose is to describe a phenomenon or explain and theorize about it (Marshall & Rossman, 2006). Is it to explore a problem that has not been previously examined or to empower others and create greater equity (Marshall & Rossman, 2006)? Answering these questions can help determine how to craft the research question(s). The methodology is another way to help develop the research question(s). For example, an ethnographic study often incorporates a question about culture. Similarly, a theoretical/conceptual framework may also influence the nature of the question(s).
Qualitative research questions are distinct from quantitative research questions in that they tend to ask How? and/or What? Qualitative research questions often do not begin with Why? because that framing tends to be driven by cause and effect or a quantitative purpose. Qualitative research questions should not be answerable with a simple yes, no, or one-word discrete answer. They should balance breadth and specificity. For example, a researcher may want to ask a question that will solve a major problem on campus. However, given the complexity of that problem, the study may not be able to solve it. Instead, ask questions that engage the larger problem by contributing to its solution or that contribute to a better understanding of the problem. The question(s) should be feasible and researchable given one's resources, skills, and knowledge (Lawrence-Lightfoot, 2016). As with other parts of qualitative inquiry, the development of a research question can also be an iterative process. In fact, Stage and Manning (2016) state, "Rarely is a research question as clear in the beginning of the study as it is at the end" (p. 8). Therefore, researchers can change or revise the research question (or add subquestions) as the study progresses and the data emerge.
Sam asks two research questions:
1. How do Students of Color experience microaggressions on campus?
2. What impact do Students of Color perceive microaggressions have on their college experience?
The first question allows for the collection of data that describes occurrences of microaggressions toward Students of Color and focuses on these students' lived experience. Although qualitative data cannot produce "cause and effect" findings, they can elucidate the perceived impact of an action. The second question will lead Sam to collect data that describes the way that Students of Color feel affected by microaggressions to demonstrate the severity of the problem and inform campus interventions.
Overview of Methodologies

Methodologies demonstrate branches of knowledge and strategies of inquiry that influence research choices (Patton, 2015). They are the guideposts that help a researcher ground a study and shape additional components of the research design. Although some studies claim a generic qualitative approach without selecting a methodology, thinking systematically about methodology can help researchers to align research questions, data collection processes, and data analyses (Patton, 2015). There are many different qualitative methodologies, but here we have selected four of the more common in higher education research: case study, ethnography, grounded theory, and narrative inquiry.

Case Study. Case study is an appropriate method when the researcher wants to explore contextual conditions that might be critical to the phenomenon of study (Yin, 2003). Within this approach, it is essential to define the boundaries of a case, which are set in terms of time, place, events, and/or processes (Merriam, 1998; Yin, 2003). The case (also described as a bounded system or unit of analysis) is the focus of the study (Merriam, 2009). Case study researchers utilize several sources of information in data collection to provide in-depth description and explanation of the case (Merriam, 2009).
Research can comprise a single case or multiple cases that are analyzed and/or compared. There are different types of case studies. For example, a descriptive case study generates a rich, thick, and detailed account that conveys understanding and explanation of a phenomenon (Merriam, 1998). Interpretive case studies go beyond describing the phenomena to present data that support, challenge, or expand existing theories (Merriam, 2009). Finally, exploratory case studies help to determine the feasibility of a research project and solidify research questions and processes (Yin, 2003).

Ethnography. Situated within the field of anthropology, ethnographers seek to understand and describe cultural and/or social groups (Spradley, 1979). Ethnographic studies examine individuals and groups interacting in ordinary settings and attempt to discern pervasive patterns such as life cycles, events, and cultural themes. Ethnography describes a culture-sharing group, uses themes or perspectives of the culture-sharing group for organizational analysis, and seeks interpretation of the culture-sharing group for meanings of social interaction (Spradley, 1979).
Ethnography assumes that the principal research interest is largely affected by community cultural understandings. Thus, "ethnographies recreate for the reader the shared beliefs, practices, artifacts, folk knowledge, and behaviors of some group of people" (LeCompte, Preissle, & Tesch, 1993, pp. 2–3). Ethnography can be emic (focused on the perspectives of the group under study), etic (focused on the researcher/outsider perspective), or blend the two approaches. The ethnographic process of inquiry suggests prolonged observation within a natural setting and in-depth interviews. Ethnographic studies also define the researcher as a key instrument in the data collection process, who describes and interprets observations of the cultural group (Mertens, 2015).

Grounded Theory. Grounded theory is an explanatory methodology developed to construct theory that emerges from and is grounded in data (Glaser & Strauss, 1967). Through this process, researchers can create a substantive theory, which is a working theory for a specific social process or context (Corbin & Strauss, 2008; Strauss & Corbin, 1998; Glaser & Strauss, 1967).
Grounded theorists do not use theoretical frameworks and historically have sought to limit a priori knowledge of the problem being studied (Glaser & Strauss, 1967), but more recent approaches have emphasized the need for sensitizing concepts, or ideas from extant literature, to provide a structure for inquiry (Charmaz, 2014). This allows for substantive theory to be created inductively, from the data. Grounded theory is also defined by its sampling and data analysis procedures. Grounded theory researchers use theoretical sampling by selecting participants based on relevant constructs and participants' experience with the phenomenon under study, rather than solely demographic criteria (Strauss & Corbin, 1998). Researchers should use data from their initial sample as a guide for recruiting additional participants to provide data to address emerging categories (Charmaz, 2014; Corbin & Strauss, 2008; Strauss & Corbin, 1998). When new data from the sample no longer add to a category or concept, the study has reached theoretical saturation and the sampling process ends.
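The stopping logic behind theoretical saturation can be sketched concretely. The following Python fragment is a hypothetical illustration only: the interview data, code labels, and stopping threshold are invented for this example and are not drawn from the chapter or the works it cites. It treats each interview as a set of open codes and flags the point at which consecutive interviews stop contributing new codes:

```python
# Hypothetical sketch of a saturation check (illustrative only).
# Each interview is represented as the set of open codes assigned to it;
# sampling stops once `stop_after` consecutive interviews add no new codes.

def reached_saturation(interviews, stop_after=2):
    """Return the index at which no new codes emerged for `stop_after`
    consecutive interviews, or None if saturation was not reached."""
    codebook = set()
    quiet_rounds = 0
    for i, codes in enumerate(interviews):
        new_codes = codes - codebook
        if new_codes:
            codebook |= new_codes   # the codebook is still growing
            quiet_rounds = 0
        else:
            quiet_rounds += 1       # nothing new from this interview
            if quiet_rounds >= stop_after:
                return i            # sampling could stop here
    return None                     # recruit more participants

# Invented example: open codes assigned to four successive interviews.
interviews = [
    {"isolation", "classroom_slights"},
    {"isolation", "advising_dismissal"},
    {"classroom_slights"},
    {"advising_dismissal", "isolation"},
]
print(reached_saturation(interviews))  # -> 3
```

In practice, saturation is an analytic judgment about categories and concepts rather than a mechanical count; the sketch only mirrors the stopping rule described above.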
Grounded theory is also known for the constant comparative method of analysis, in which data are iteratively collected and compared to emerging categories through a coding process (Strauss & Corbin, 1998). The constant comparative method will be further explained in the Data Analysis section.

Narrative Inquiry. Narrative inquiry centers on telling a story or stories and thus "takes as its object of investigation the story itself" (Riessman, 1993, p. 1). Researchers using this methodology organize the narrative of a single participant or narratives of multiple participants to share, shape, and connect their experiences (Chase, 2011). Chronology and timeline are central features of narrative inquiry (although narratives themselves do not need to follow a linear story). In addition, this methodological approach often involves multiple, in-depth interviews and/or other data such as existing documents, and necessitates a reflexive relationship between researchers and their participants in order to re-tell stories through empirical findings (Chase, 2011). Data collection methods for this approach should allow for telling by the participant(s), interpretation of the experience(s) by the researcher, representation of the story or stories, and reflection on assumptions made about the self while engaging in telling and re-telling the narratives (Jackson & Mazzei, 2013).
There are many forms of narrative inquiry, including oral histories, biographies, testimonies, and memoirs. Given Sam's interest in focusing on the voices of Students of Color regarding microaggressions, they select narrative inquiry. This methodology can use participants' stories to expose oppressive actions (Chase, 2011). Narrative inquiry will shape the study's emphasis on examining students' experiences with microaggressions throughout their time at the university and on eliciting specific examples or stories related to those experiences.

Tools for Data Collection

The main types of data collection in qualitative research include participant observation, individual interviews, and focus groups (Guest, Namey, & Mitchell, 2013). The research questions and methodologies may lead toward a certain type of data collection, or a study that combines multiple approaches to gather data (multimodal design). All three approaches require some initial planning beyond crafting questions, including establishing a location, obtaining any necessary tools prior to implementation (e.g., recording devices), and dedicating time immediately afterward to process initial reflections and analysis (Guest et al., 2013).
Observations. Observations are typically the result of the researcher's experiences in a given situation or environment. As opposed to direct observation, like the detail recovered by a video camera or a two-way mirror, participant observation includes the researcher as a part of the environment, directly absorbing and processing information (Guest et al., 2013). Researchers are engaged in the environment by taking notes, recording their environment, and asking questions to uncover meaning (Guest et al., 2013). This form of data collection is used to discover complex interactions in social settings (Marshall & Rossman, 2006). By being in a space where the topic of interest occurs, researchers can record the behavior of interest as it happens and provide context (Merriam, 2009). The degree of what a researcher can observe may be determined by the relationships they have in the community, the access they negotiate, and the amount of time spent gathering data (Guest et al., 2013).
In observations, the goal of the researcher is to record field notes with a high degree of detail. These notes involve physical surroundings, context, people, and their actions (Neuman, 2006). Prior to beginning observations, the researcher should choose an organizational system that will allow for tracking direct observations with inferences, analysis, and personal journaling (Neuman, 2006). Although many of these notes are written during the observation, the researcher should also budget time shortly after finishing the observation to jot down additional notes. The time after observation may be used to create analytic memos in which to record plans, reflect on ethical decisions, and create maps or diagrams of occurrences or relationships (Neuman, 2006). Although observations may involve a large time commitment of many hours, as a form of data collection they allow a researcher to engage directly with human behavior, particularly behavior that participants are less aware of or less able to discuss.
Interviews. The most popular form of data collection, individual interviews use open-ended questions to learn about participants' experiences, memories, reflections, and opinions (Magnusson & Marecek, 2015). Different types of interviews allow researchers to incorporate varying degrees of flexibility as desired by their paradigm, methodology, and style. There are four interview types (Rossman & Rallis, 2017; adapted from Patton, 2015): (a) informal interviews in a casual setting, often recorded through field notes; (b) an interview guide approach, with preset categories and topics but flexibility to address emerging topics; (c) a standardized open-ended interview with a set order of fixed questions; and (d) true conversations in the form of dialogic interviews. The goal of an interview is to gain rich, in-depth, personal experiences that relate directly to the research topic (Magnusson & Marecek, 2015). To conduct an interview, a researcher should have "superb listening skills and be skillful at personal interaction, question framing, and gentle probing for elaboration" (Marshall & Rossman, 2006). Guest and colleagues (2013) recommend using interviews to gain in-depth insight, explore new topics, and gain information about potentially sensitive or polarizing topics.
In approaching interviews, they provide the following suggestions:
Schedule interviews at times that are mutually convenient, with an emphasis on the interviewee's preferences.
Allot around 45–90 minutes for an in-depth interview.
Pilot the interview protocol prior to implementation to ensure effectiveness.
Plan ahead for what kind of data will be needed during analysis. This can include summaries of the conversation, expanded interview notes, audio/video recordings, and verbatim transcripts.
Although these suggestions provide an initial framework, all decisions around interviews are contingent on an understanding of the participants and topic under study.

Focus Groups. For researchers interested in understanding how individuals discuss a topic collectively, focus groups can save time and money while gathering rich data.
Focus groups tend to be most useful for gaining information on group norms and processes, opinions and perspectives, reactions and responses, and brainstorming (Guest et al., 2013). Because focus groups allow the researcher to see real-time responses, they provide beneficial opportunities to view how individuals agree, disagree, or respond to one another. A key benefit of focus groups is their assumption that an individual's attitudes and beliefs do not form in a vacuum; participants develop their opinions and understandings by engaging with others (Marshall & Rossman, 2006). The ideal group contains approximately 7–10 individuals, ideally strangers, to encourage varying viewpoints (Rossman & Rallis, 2017). Utilizing strangers also helps to decrease the social desirability bias, common in interview settings, that leads participants to respond or behave in a certain way. Depending on the study, researchers could choose to recruit homogeneous or heterogeneous groups of participants (Mertens, 2015). As focus groups include multiple moving pieces, they rely greatly on the skill of the facilitator to keep the conversation on track, ask appropriate probes, and ensure a balance of voices.
Interview protocols should establish ground rules prior to beginning, prioritize key questions to allow for as much fluidity in the conversation as possible, and create a limited time commitment (Guest et al., 2013).

For their study, Sam decides to conduct individual interviews to understand how Students of Color describe microaggressions and their manifestations within the context of their overall college experience. Sam chooses interviews because microaggressions can be a sensitive topic for individuals to share in a focus group, and there is no clear context in which Sam could conduct observations of this behavior. They choose a standardized open-ended interview with questions that include
1. In thinking about the past week, can you describe any microaggressions you have encountered and the context in which they occurred?
2. How would you describe the impact of these microaggressions on your overall student experience?
Sam prepares prompts for the interview questions and pilots the interview protocol with several colleagues who identify as People of Color before determining that the interviews will last around an hour each.
Data Analysis

Although there are numerous qualitative data analysis techniques, they all share at least three common characteristics. First, the qualitative data analysis process often begins during data collection. Thus, the analytic process is considered iterative or nonlinear (Creswell, 2014). A researcher may collect data and engage in early analysis only to realize that more data are needed to understand the participants' experiences. Even when formal data analysis does not begin while data collection is ongoing, qualitative researchers often use memos to document emerging ideas and patterns, which form the basis for subsequent analysis. Initial data analysis that occurs during data collection can also allow researchers to consider whether they are obtaining the type and quality of information they intended. Second, a major goal of qualitative data analysis is data reduction (Creswell, 2014).
Qualitative research can produce large amounts of data, and the analytic process works to reduce the volume of information by identifying major patterns and themes within it. Researchers can engage this process on their own, in teams, and/or using computer-assisted qualitative data analysis software (CAQDA) such as NVIVO (see Bazeley & Jackson, 2013) or Atlas.ti. Third, the process is immersive, meaning that it requires a high level of engagement with the data. This can include reading and rereading interview transcripts multiple times to exhaust exploration of the data. During this process, researchers often write memos that help to document initial interpretations of the data as well as engage in reflexivity (e.g., processing how one's background, biases, and perspectives may influence the analytic process) (Lincoln & Guba, 1985). These memos can be used as part of one's audit trail, which is a record of research steps that helps to ensure data quality and transparency (Lincoln & Guba, 1985). One popular analytical tool is the constant comparative method. Although grounded theorists developed this method, it is commonly used as a general tool for analyzing data and is useful for those learning how to engage in qualitative analysis because it provides a specific three-phase process.
This process is known as coding, in which short words or phrases are used to "assign a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data" (Saldaña, 2013, p. 3). Codes can reflect activities, relationships, roles, processes, emotions, perspectives, and other units of social organization. The constant comparative method begins with open coding words, lines, several sentences, or paragraphs of data. Open coding can be deductive and/or inductive (Strauss & Corbin, 1998). Deductive codes stem from borrowed concepts such as components of the theoretical framework or key themes from relevant literature. Inductive or in vivo codes are emergent from the data. Inductive coding can be developed from data that "strike as interesting, potentially relevant, or important to the study . . . for answering the research questions" (Merriam, 2009, p. 178). Whether the open codes are deductive or inductive, it is important to identify the codes with names and definitions clearly (Miles & Huberman, 2005).
The next stage in the constant comparative method is axial coding, which is performed iteratively during the open coding process and also after open codes are developed. This stage begins the reduction process and includes comparing and connecting emerging codes into categories (Strauss & Corbin, 1998). Categories are "conceptual elements that cover or span many individual examples or codes previously identified" (Merriam, 2009, p. 181). For example, while a researcher may have 100 open codes, the researcher might reduce these codes into 20 categories. One can do this by grouping together data by related open codes to reassemble the data and demonstrate recurrent patterns and themes (Strauss & Corbin, 1998). The axial coding process is also useful for separating data that are essential to the purpose of the study from data that fall outside the scope of the research purpose and question(s). The final phase of the constant comparative approach is selective coding; however, some researchers will only perform open and axial coding, particularly for exploratory studies. During the selective coding process, the researcher pulls together themes to develop a storyline and identify a core category (Strauss & Corbin, 1998). The core category "is the central defining aspect of the phenomenon to which all other categories and hypotheses are related or interconnect" (Merriam, 2009, p. 200). For example, a researcher might move from 20 categories to between one and five overarching themes. This reflects the primary narrative emerging across the data that provides a response to the research question(s).
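The reduction at the heart of open, axial, and selective coding can also be illustrated with a small sketch. The following Python fragment is hypothetical: the open codes, category labels, and core category are invented for illustration and are not findings from the chapter or the works it cites. It shows open codes collapsing into categories (axial coding) and categories being tied to a single core category (selective coding):

```python
# Hypothetical sketch of constant-comparative data reduction (illustrative only).
from collections import Counter

# Open coding: short labels assigned to excerpts of transcript data.
open_codes = [
    "ignored_in_class", "accent_mocked", "asked_to_speak_for_race",
    "ignored_in_class", "assumed_unqualified", "accent_mocked",
]

# Axial coding: related open codes are grouped into broader categories.
axial_map = {
    "ignored_in_class": "classroom_microaggressions",
    "accent_mocked": "identity_based_slights",
    "asked_to_speak_for_race": "tokenization",
    "assumed_unqualified": "competence_questioning",
}
categories = Counter(axial_map[code] for code in open_codes)
print(categories)

# Selective coding: categories are related to one core category that
# anchors the storyline emerging across the data.
core_category = "everyday racialized invalidation"
print(f"Core category: {core_category} ({len(categories)} supporting categories)")
```

In an actual analysis these groupings emerge from repeated comparison with the data, with clearly named and defined codes, rather than from a fixed lookup table; the sketch only makes the many-to-few reduction visible.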
Sam considers the constant comparative approach, but instead chooses an analytic approach that stems from narrative inquiry. This involves four phases: (a) an initial reading of transcripts to note general themes and consider how each part contributes to the whole story; (b) rereading the transcripts to see whether multiple narratives are present and to consider the structure, content, and larger contexts involved; (c) investigating the emerging patterns, including how the whole story and its parts are told; and (d) engaging the literature/theoretical framework with the participants' narrative(s) to glean a more in-depth understanding of the story (Josselson, 2011).
Research Quality

Although quantitative inquiry strives for reliability and validity, in qualitative research, trustworthiness is the predominant standard of research quality (Guba & Lincoln, 1989; Lincoln & Guba, 1985). Trustworthiness can be established in multiple ways. One is by producing work that is transferable, or that provides enough context for readers to infer similar results in their own context (Krefting, 1999; Lincoln & Guba, 1985). This can be done by providing detailed documentation of data collection and analysis procedures as well as by using thick, rich description of participants' experiences (Krefting, 1999; Lincoln & Guba, 1985). One goal of qualitative research is credibility, or having data that accurately reflect the phenomenon (Krefting, 1999; Lincoln & Guba, 1985). Fostering credibility can begin during the data collection phase with prolonged engagement with participants. Another tool is member checking, which involves testing the interpretations of the data with study participants by sharing initial data analysis for their feedback (Krefting, 1999; Lincoln & Guba, 1985).
Peer debriefing requires meeting with an individual who is unaffiliated with the research (a disinterested peer) and can give honest feedback (an equal power dynamic) about the plausibility of data interpretations. Additionally, triangulation can be built into the research design to produce divergent constructions of reality (Lincoln & Guba, 1985). For example, one can engage methodological triangulation through the use of multiple forms of data collection (interviews, participant observation) or data triangulation through multiple data sources. Triangulation can also be performed through the involvement of multiple researchers (analyst triangulation) or during data analysis through the use of multiple theoretical frames (theory/perspective triangulation) (Patton, 2015). Triangulation can establish confirmability, ensuring that findings are shaped more by study participants than by researcher biases. Reflexive processes such as journaling, engaging in dialogue with other researchers, and naming one's positionality (e.g., the relationship between the researcher and the participants/study topic) within the write-up of the study can develop confirmability. Lastly, trustworthy studies should be dependable, or demonstrate consistent findings that could be repeated (Lincoln & Guba, 1985). To establish dependability (and confirmability), researchers can create an audit trail that documents the steps and processes they engaged in during the qualitative investigation.
Sam selects multiple strategies to increase the trustworthiness of the study. One is member checking. Sam sends each of the participants their transcript with initial interpretations and questions. After giving the participant time to review the transcript and notes, Sam calls each participant to briefly ensure that the interpretations reflect the participant's meaning and to clarify any questions about the narratives. Another is using thick, rich description, including direct quotes from participants in the final write-up of the study. Lastly, Sam engages in peer debriefing with an institutional researcher in the office. This individual is not involved in the study, but is a Person of Color who graduated from a predominantly white institution 3 years prior.

Conclusion

Qualitative research provides an important opportunity to engage with participants' experiences through their own voices and behaviors. Unlike quantitative methodologies, qualitative approaches view the researcher as the instrument through which data are collected (Patton, 2015). As such, intentional engagement throughout each step of the research process is crucial to ensure a well-aligned, accurate, and ethical design.
Successful use of qualitative methodologies fosters opportunities for institutional researchers to pursue new questions and experiences within their work (McLaughlin et al., 2001). The rest of the volume continues to look at specific contexts and considerations in which qualitative research can aid institutional research.

References

Bazeley, P., & Jackson, K. (2013). Qualitative data analysis with NVIVO. Thousand Oaks, CA: Sage.
Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Thousand Oaks, CA: Sage.
Chase, S. E. (2011). Narrative inquiry: Still a field in the making. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (4th ed., pp. 421–434). Thousand Oaks, CA: Sage.
Corbin, J., & Strauss, A. (2008). The basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage.
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed approaches (4th ed.). Thousand Oaks, CA: Sage.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York, NY: Aldine.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth-generation evaluation. Newbury Park, CA: Sage.
Guest, G., Namey, E. M., & Mitchell, M. L. (2013). Collecting qualitative data: A field manual for applied research. Thousand Oaks, CA: Sage.
Guido, F. M., Chávez, A. F., & Lincoln, Y. S. (2010). Underlying paradigms in student affairs research and practice. Journal of Student Affairs Research and Practice, 47(1), 1–22. https://doi.org/10.2202/1949-6605.66017
Harper, S. R., & Kuh, G. D. (2007). Myths and misconceptions about using qualitative methods in assessment. New Directions for Institutional Research, 136, 5–14. https://doi.org/10.1002/ir.227
Jackson, A., & Mazzei, L. (2013). Plugging one text into another: Thinking with theory in qualitative research. Qualitative Inquiry, 19(4), 261–271.
Josselson, R. (2011). Narrative research: Constructing, deconstructing, and reconstructing story. In F. J. Wertz, K. Charmaz, L. M. McMullen, R. Josselson, R. Anderson, & E. McSpadden (Eds.), Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry (pp. 224–242). New York, NY: The Guilford Press.
Krefting, L. (1999). Rigor in qualitative research: The assessment of trustworthiness. In A. Miliniki (Ed.), Cases in qualitative research: Research reports for discussion and evaluation (pp. 173–181). Los Angeles, CA: Puscale.
Lawrence-Lightfoot, S. (2016). Portraiture methodology: Blending art and science. Learning Landscapes, 9(2), 19–27.
LeCompte, M. D., Preissle, J., & Tesch, R. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego, CA: Academic Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
Magnusson, E., & Marecek, J. (2015). Doing interview-based qualitative research: A learner's guide. Cambridge, UK: Cambridge University Press.
Marshall, C., & Rossman, G. B. (2006). Designing qualitative research (4th ed.). Thousand Oaks, CA: Sage.
McLaughlin, J. S., McLaughlin, G. W., & Muffo, J. A. (2001). Using qualitative and quantitative methods for complementary purposes: A case study. New Directions for Institutional Research, 112, 15–44. https://doi.org/10.1002/ir.26
Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.
Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
Mertens, D. M. (2015). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (4th ed.). Los Angeles, CA: Sage.
Miles, M. B., & Huberman, A. M. (2005). Qualitative data analysis. Thousand Oaks, CA: Sage.
Morgan, D. L. (2007). Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods. Journal of Mixed Methods Research, 1(1), 48–76. https://doi.org/10.1177/2345678906292462
Neuman, W. L. (2006). Social research methods: Qualitative and quantitative approaches (6th ed.). Boston, MA: Pearson.
Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage.
Riessman, C. K. (1993). Narrative analysis. Newbury Park, CA: Sage.
Rossman, G. B., & Rallis, S. F. (2017). An introduction to qualitative research: Learning in the field (4th ed.). Los Angeles, CA: Sage.
Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Thousand Oaks, CA: Sage.
Spradley, J. P. (1979). The ethnographic interview. New York, NY: Holt, Rinehart and Winston.
Stage, F. K., & Manning, K. (Eds.). (2016). Research in the college context: Approaches and methods (2nd ed.). New York, NY: Routledge.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage.
Van Note Chism, N., & Banta, T. W. (2007). Enhancing institutional assessment efforts through qualitative methods. New Directions for Institutional Research, 136, 15–28. https://doi.org/10.1002/ir.228
Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.

CHRYSTAL A. GEORGE MWANGI is an assistant professor of higher education at the University of Massachusetts Amherst.

GENIA M. BETTENCOURT is a doctoral candidate in higher education at the University of Massachusetts Amherst.
Research in Higher Education, Vol. 36, No. 5, 1995

ASSUMPTIONS UNDERLYING QUANTITATIVE AND QUALITATIVE RESEARCH: Implications for Institutional Research

Russel S. Hathaway

For institutional researchers, the choice to use a quantitative or qualitative approach to research is dictated by time, money, resources, and staff. Frequently, the choice to use one or the other approach is made at the method level. Choices made at this level generally have rigor, but ignore the underlying philosophical assumptions structuring beliefs about methodology, knowledge, and reality. When choosing a method, institutional researchers also choose what they believe to be knowledge, reality, and the correct method to measure both.
The purpose of this paper is to clarify and explore the assumptions underlying quantitative and qualitative research. The reason for highlighting the assumptions is to increase the general level of understanding and appreciation of epistemological issues in institutional research. Articulation of these assumptions should foster greater awareness of the appropriateness of different kinds of knowledge for different purposes.

There are few subjects that generate as much passion among scientists as arguments over method. (Shulman, 1981, p. 5)

Institutional researchers are continually involved with implementing research agendas for various campus constituencies. Institutional research offices provide important technical and informational support for central decision makers in higher education by engaging in research-oriented activities such as tracking enrollment patterns, surveying incoming students, documenting faculty workloads, and assessing staff job satisfaction. Research methods that institutional researchers employ range from basic quantitative statistical analyses to interviews and case studies (Bohannon, 1988; Bunda, 1991; Fetterman, 1991; Hinkle, McLaughlin, and Austin, 1988; Jennings and Young, 1988; Sherman and Webb, 1988; Tierney, 1991).
Some institutional researchers advocate integrating quantitative and qualitative approaches to institutional research (Marshall, Lincoln, and Austin, 1991; Peterson, 1985a). Often, the driving forces behind the choice of methods are time, money, resources, staff, and those requesting the study. The choice to use a quantitative approach (e.g., survey and statistical analysis of responses) versus a qualitative approach (e.g., transcription analysis of interviews) is generally decided at the level of methods. Although the choice of methods is often a difficult one, institutional researchers generally make the decision with relative ease, choosing the method that will garner the information they seek. However, they often make their decisions without giving much thought to the assumptions underlying research methods.
Over the past decade, educational researchers have been engaged in an ongoing polemic concerning quantitative and qualitative research. They have been arguing over philosophical commensurability, the concern that qualitative research has been seen as a methodological variation of quantitative research, and whether researchers should combine quantitative and qualitative research methods when pursuing research interests (Donmoyer, 1985; Eisner, 1981, 1983; Firestone, 1987; Howe, 1985, 1988; Shulman, 1981). Although the intricate details of this debate are not of paramount concern for institutional researchers, the general discourse over the fundamental philosophical grounds guiding research methods is relevant. Some of those involved in the debate argue that the choice to use a quantitative or qualitative research approach should not be made at the method level (Guba and Lincoln, 1981). This concern has direct relevance for those making methodological choices in an applied field such as institutional research. The decision to use quantitative or qualitative methods is replete with assumptions concerning the nature of knowledge and reality, how one understands knowledge and reality, and the process of acquiring knowledge and knowledge about reality.
When one chooses a particular research approach, one makes certain assumptions concerning knowledge, reality, and the researcher's role. These assumptions shape the research endeavor, from the methodology employed to the type of questions asked. When institutional researchers make the choice between quantitative and qualitative research methods, they tacitly assume a structure of knowledge, an understanding and perception of reality, and a researcher's role. The purpose of this paper is to clarify and explore the underlying assumptions contained within quantitative and qualitative research. It is important for institutional researchers to understand the philosophical grounding of the two approaches so that they may reflect on those assumptions while engaging in institutional research. In addition, understanding the philosophical grounding also highlights the strengths and weaknesses of both approaches. The reason for contrasting the two paradigms is to increase the general level of understanding and appreciation of epistemological issues in the institutional research profession. Articulation of the epistemological differences should foster greater awareness of the appropriateness of different kinds of knowledge for different purposes; it may thereby help legitimate the adoption of alternative and more appropriate knowledge-yielding paradigms in institutional research.
It should also help reduce conflicts within the field by justifying and providing a basis for tolerance of diversity and multiplicity in research design. Greater epistemological appreciation seems to be an essential prerequisite to developing an appropriate inquiry approach whereby researchers would explicitly select a mode of inquiry to fit the nature of the problematic situation under study, the state of knowledge, and their own skills, style, and purpose (Donmoyer, 1985; Smith, 1983a). In addition, appreciation of epistemological issues has implications for the evaluation of institutional research products. It leads to a belief that the quality of a piece of research is more critically indicated by the appropriateness of the paradigm selected than by the mere technical correctness of the methods used (Donmoyer, 1985; Eisner, 1981; Herriott and Firestone, 1983; Smith, 1983a). The debate that has been going on among educational researchers will be highlighted in brief, emphasizing the major points that have been raised by proponents of quantitative and qualitative research, as well as the arguments of those who advocate the combination of the two approaches.
This debate will be used as a stepping stone into a discussion of the underlying philosophies of quantitative and qualitative research. An example program review will be used to describe how the two approaches might structure the program investigation and evaluation. This example is not meant to represent an ideal, or even typical, method of conducting a review, but rather to provide a vivid sense of the distinctions between the two approaches. Finally, differences between the paradigms will be identified and discussed, followed by a conclusion highlighting implications for conducting inquiry in institutional research.

DEBATES OVER DISCIPLINED INQUIRY

Early Debate

The educational research community is engaged in a heated debate over quantitative and qualitative approaches to disciplined inquiry. The crux of the debate centers on the incommensurability of the underlying assumptions structuring the approaches (Donmoyer, 1985; Eisner, 1981; Firestone, 1987; Howe, 1985). This debate in educational research, however, followed a crisis in the social sciences concerning identical philosophical issues (Bernstein, 1976, 1983; Rabinow and Sullivan, 1987).
Bernstein (1976) has provided one of the most comprehensive summaries of the history of the social science debates, as well as a rich description of the various research paradigms that were, and still are, being discussed. The debate over quantitative and qualitative research arose out of the social and political unrest of the 1960s, during which the foundations of the social disciplines came under radical criticism (Bernstein, 1976; Rabinow and Sullivan, 1987). Bernstein argues that these critiques came at a time when the social disciplines had arrived at a tentative agreement on an empirical foundation where they could begin a stable expansion of the scientific knowledge of society. Critics argued, and continue to argue, that the foundations of the social sciences were replete with weakness; that what was believed to be objective scientific knowledge was a veiled form of ideology that supported the status quo (Bernstein, 1976; Gordon, Miller, and Rollock, 1990; Stanfield, 1985).
Others argued that the social sciences did not provide critical perspectives on what was happening in society, nor did they provide solutions to the problems they set out to solve (Bernstein, 1976). The belief that a rational, systematic study of societal problems would result in policies and programs that would address them was doubted (Bernstein, 1976). As the social sciences began to experience profound criticism and self-doubt, newly discussed approaches arose to rescue social science research from the depths of angst. Bernstein (1976) argues that linguistic philosophical inquiries were used to challenge the epistemological foundations of the social sciences. Phenomenology and hermeneutics also became more welcome in social scientific circles. These disciplines, often characterized as soft and subjective by empirical researchers, were perceived as panaceas for the ills facing social research (Bernstein, 1976). Advocates of phenomenology and hermeneutics believed that these approaches could provide elucidative insight into social processes that was not being acquired with empirical inquiry methods (Bernstein, 1976). The literature produced in this period concerning the nature of research can best be described as muddled. Bernstein (1976) reports that there was no agreement during the 1960s and 1970s about what were provable results, what were the proper research methods, what were important problems to address, or what were "the most promising theoretical approaches" in the study of social science.
It was during this confusing period that the educational community began questioning its approaches to disciplined inquiry.

Educational Debate

Closely following what could be called the "angst" period in social science research, the educational research community began to experience a similar debate, beginning in the late 1970s and continuing today (Donmoyer, 1985; Eisner, 1981, 1983; Firestone, 1987; Garrison, 1986; Giarelli and Chambliss, 1988; Howe, 1985, 1988; Hutchinson, 1988; Lincoln and Guba, 1985; Marshall, Lincoln, and Austin, 1991; Sherman and Webb, 1988). Throughout this period, educational researchers have engaged in a heated debate over the degree to which quantitative and qualitative methods can be combined. This discourse has revolved around defining different facets of qualitative and quantitative research, along with debate focusing on the pros and cons of combining the two approaches.
In general, researchers fall into three perspectives in this discussion: the purists, the situationalists, and the pragmatists (Rossman and Wilson, 1985). The purists would not entertain the notion of discussing combining the two approaches. Educational researchers within this perspective focus on the incommensurability between the two approaches and argue that the philosophies grounding the two approaches are so divergent in terms of assumptions about the world, truth, and reality that one should not even consider combining quantitative and qualitative research (Guba, 1987; Smith, 1983a, 1983b; Smith and Heshusius, 1986). The concern is that by combining approaches, researchers neglect to acknowledge that the different approaches make vastly different assumptions concerning knowledge and reality. Others have discussed the problems involved in ignoring the issue of underlying assumptions and focusing only on the benefits of combining both approaches (Donmoyer, 1985). In contrast, those falling within the situationalist perspective focus on the level of method and argue "that certain methods are most appropriate for specific situations" (Rossman and Wilson, 1985, p. 630). The choice of method for the situationalist is partially determined by the questions to be answered. Furthermore, situationalist researchers also alternate between qualitative and quantitative methods as they engage the research process (Rossman and Wilson, 1985).
In other words, researchers adhering to this perspective may use a survey to generate information that could assist in the development of an interview protocol. For the pragmatist, quantitative and qualitative methods are viewed as capable of informing one another throughout the research process. In contrast to the situationalist, who alternates between the two approaches, the pragmatist views the two approaches as capable of simultaneously bringing to bear both of their strengths to answer a research question. Using interviews, surveys, questionnaires, and observation techniques within one study is an example of a pragmatic approach to integrating or combining research methods.

Institutional Research Debate

Throughout the past decade of discussion, the educational research debate has highlighted many of the strengths and weaknesses of quantitative and qualitative research, and it has brought to light the philosophies underlying the two approaches. For institutional researchers, the debate follows closely on the heels of the evolution of the profession, a profession that has slowly moved from engaging primarily in descriptive quantitative studies in the 1970s and 1980s (Peterson, 1985a, 1985b) to more multimethod studies in the 1990s (Peterson and Spencer, 1993).
Institutional researchers have engaged in debates similar to those of educational researchers, but not to the same extent. Institutional researchers have primarily discussed quantitative and qualitative differences at the method level (Bohannon, 1988; Fetterman, 1991; Fincher, 1985; Hinkle, McLaughlin, and Austin, 1988; Jennings and Young, 1988; Marshall, Lincoln, and Austin, 1991; Tierney, 1991) and how different methodologies yield different information (Peterson and Spencer, 1993). At this level, institutional researchers have been attending, correctly so, to assumptions supporting specific statistical procedures (Bohannon, 1988; Yancey, 1988a, 1988b), such as having random selection when performing multiple regression, having a sample size larger than five in each cell of an ANOVA, or assuming a normal distribution. By attending to the assumptions at this level, institutional researchers have produced studies that have rigorous application of methods. Institutional researchers advocating qualitative approaches have also appropriately focused attention on the assumptions guiding good qualitative research.
Qualitative institutional researchers have tended to address data collection procedures, such as having an interview protocol composed of nonleading questions and accurate transcripts of observation or taped accounts (e.g., Miles and Huberman, 1984), but they fail to discuss or acknowledge the scientific philosophies (i.e., phenomenology, hermeneutics, positivism) in which they are grounded. Those using quantitative approaches also neglect to mention the philosophical grounds on which their approaches are based. It is important for institutional researchers to be cognizant of the philosophical assumptions guiding both quantitative and qualitative approaches. It is important not only because it is good practice, but because institutional research is an applied field in which much of what is done is used for policy decisions. These policy decisions, once implemented, make assumptions concerning the reality of campus life. These realities are defined, in part, by the underlying philosophies structuring the approach used by the institutional researcher. For example, a statistical analysis performed on survey results may describe certain aspects of the campus, but these aspects have been shaped by the people who developed the survey and may not reflect the reality as understood and experienced by those who answered the survey.
PARADIGMS UNDERLYING QUANTITATIVE AND QUALITATIVE RESEARCH

Defining Paradigm

A framework is needed to discuss the differences between the philosophies underlying the quantitative and qualitative research approaches. The distinction between the philosophies can be made by using the concept of paradigm. The work of a number of philosophers (Bernstein, 1976; Firestone, 1987; Gubrium, 1988; Kuhn, 1962, 1970, 1974) is quite useful in defining the idea of a paradigm. Kuhn defines a scientific paradigm as a theoretical framework, or a way of perceiving and understanding the world, that a group of scientists has adopted as their worldview. For Bernstein, the underlying paradigm dictates a level of generally unexamined common assumptions, attitudes, and expectations, and a framework within which inquiry operates. This paradigm guides a shared sense of what scientific inquiry is and could be: the kind of reality being investigated, the proper form of inquiry, and a general orientation for perceiving and interpreting the world (Bernstein, 1976).
    the proper formof inquiry, and a general orientation for perceiving and inter- preting the world (Bernstein, 1976). The paradigm from which one operates has consequences for views of the nature of empirical knowledge, the relations of theory and practice, the relations of fact and perception, and the education and role of the theorist (Bemstein, 1976). When a scientist observes a phenomenon and interprets what this observation means, that scientist is using a particular paradigm to give that observation meaning. Gubrium (1988) defines paradigm as a way of structuring everyday experience, a way of framing events, a sense of what is real and how to prove it, and an implicit stance on ontology and epistemology (i.e., being and knowing). This paradigm also influences the methods chosen (Firestone, 1987) and implies policy and action (Banks, 1988). In essence, scientific paradigms act as lenses through which scientists or researchers are able to perceive and understand the problems in their field and the scientific answers to those problems. Paradigms dictate what researchers consider data, what their role in the investigation will be, what they consider knowledge, how they view reality, and how they can access that reality. A scientific paradigm provides a group of scientists or researchers with a way of collectively making sense of their scientific world. It is this
framework of everyday assumptions about knowledge, reality, and the proper methodology that will be summarized in the remainder of this paper.

Distinguishing the Paradigms

Quantitative and qualitative approaches can both serve research purposes, but in different ways and with different effects. The ways in which they are used and the insights provided are a function of the underlying assumptions of the paradigms grounding the approaches. The attempt here is to systematically articulate the paradigms underlying both these approaches by direct description and by contrast with a recognizable alternative. The two paradigms will be contrasted on a number of dimensions (summarized in Table 1).

TABLE 1. The Paradigm Framework Underlying the Two Approaches

Methodology
  Empirical-Analytic (quantitative): Begin with hypothesis of a relationship between cause & effect (program & outcome). Test the hypothesis. Develop instruments. Identify sample (random). Measure/code phenomenon. Aggregate data. Generalize: theory that has withstood this test. Researcher should have objective stance.
  Interpretive (qualitative): Formulate a question. Identify sample (purposive). "Fix" phenomenon (interview, observe, tape record). Narrative articulation and interpretation of themes. Compare data types; integrate material (as parts of a whole). Hypothesis generating. Write case descriptions. Generalize? Researcher should have participatory stance.

Ontology (Reality)
  Empirical-Analytic (quantitative): Public events (e.g., discussion, utterances, etc.) reflect a reality. Private perceptions (e.g., beliefs, perceptions, etc.). Subjects and objects exist separate from the perception of them. Objective events, subjective states. Governed by laws.
  Interpretive (qualitative): Public events. People have different aims, attitudes. People interact within, and can change, a local setting. Subjects and objects located in intersubjective communities. People construct and act within a context which structures and constrains that activity.

Epistemology (Knowledge)
  Empirical-Analytic (quantitative): Knowledge = objective reports of measured dimensions of the phenomenon. Compared against (tacit) norms ("skills"). Compared over time. Differences/no differences attributed to hypothesized causal relationship, to lack of validity of instruments, or to alternative causes. General statements of regularities among objective properties that are internally consistent and that correspond to the way things really are.
  Interpretive (qualitative): Knowledge = understanding of participants' aims, perspectives, assumptions: the terms in which their everyday life is grasped. Plus articulation of the local social context of interaction. Description of specific cases (persons & communities): people employ interpretive schemes that must be understood before action can be described. Character of the local context must be articulated.

Before discussing the differences between the two paradigms of inquiry, it may be useful to comment on the paradigms underlying the distinction between quantitative and qualitative research approaches. A review of the literature yields a wide array of distinctions between the philosophies undergirding the two approaches. Included among these are Geertz's (1973) distinction between thin and thick description; Hall's (1976) low context and high context; Pike's (1967) etic and emic; Kaplan's (1964) logic-in-use and reconstructed logic; Smith's (1983a) realist and idealist continuum; Smith and Heshusius's (1986) rationalist and naturalist distinction; Habermas's (1988) and Bernstein's (1976) empirical-analytic and interpretive distinction; and the distinctions between acquaintance with and knowledge about as variously construed by James (1918), Dewey (1933), Schutz (1967), and Merton (1972). For the purposes of this paper, the term empirical-analytic will be used to describe the paradigm structuring quantitative research, and the term interpretive will be used to describe the paradigm underlying qualitative research.

Most commonly, the empirical-analytic paradigm has been associated with positivism. There are many varieties of positivism (Phillips, 1983). Comptean
positivism holds that the scientific method can be applied to human experiences (Phillips, 1983). Thus, researchers within the Comptean tradition focus upon observable, objectively determinable phenomena. In contrast, logical positivism is marked by an intense dislike of metaphysics and aims to remove metaphysics from both the natural and the human sciences (Phillips, 1983). Logical positivists also believe in the verifiability principle of meaning, which states that something is meaningful if and only if it is verifiable empirically--directly by observation through the senses. From the verifiability concept arose the idea of operational definitions (Phillips, 1983). Other terms used to describe this paradigm include hypothetico-deductive and objectivist (Moss, 1990).

The description of the interpretive inquiry paradigm has a wider range of descriptors. Interpretive inquiry can be described as phenomenological, hermeneutical, experiential, and dialectic. Naturalistic, inductive, and relativist are some of the other terms used to describe the interpretive paradigm (Moss, 1990). Each of these terms is difficult to describe in brief or with precision. It needs to be made clear that although many of these terms can be identified as interpretive research, caution is required in equating it with any one of them. This is due to the slight variation in assumptions concerning different interpretive approaches.[6] As will be seen later, all the interpretive research traditions, however, generally share common assumptions about methodology, ontology, and epistemology.[7]

Methodological and Ontological Differences

The description of the paradigms begins by comparing the researcher's role and relationship to the setting under the two paradigms, and by identifying the epistemological and validity assumptions underlying the choice of role and relationship.[8] Knowledge and understanding of a college or university situation can be acquired in two ways: (1) by studying, "objectively," data generated by the situation, and (2) by becoming part of the situation by understanding participant views of it. We can come to "know" the chemistry and psychology departments by examining faculty research productivity, enrollment statistics, questionnaire results, or GRE subject tests; or, alternatively, by functioning within these departments for a period of time talking with faculty, students, and staff.

Empirical-analytic inquiry is characterized by the researcher's
detachment from the organizational setting under study (Eisner, 1981; Phillips, 1983; Smith, 1983a, 1983b). The detachment derives, in part, from the assumption that the object under study is separate from, unrelated to, independent of, and unaffected by the researcher (Eisner, 1981; Smith, 1983a, 1983b). The mind is separate from reality (Smith, 1983a, 1983b) and truth is defined as a correspondence between our words and that independently existing reality (Smith and Heshusius, 1986). In other words, "there are social facts with an objective reality apart from the beliefs of individuals" (Firestone, 1987, p. 16). Physics provides an ideal example. The objects of interest are measured with instruments, the data are analyzed to determine if logical patterns seem to exist, and rational theories are constructed to integrate, explain, and perhaps predict a multitude of facts. Underlying the detachment of the researcher inquiring from an empirical-analytic perspective are critical ontological assumptions: the researcher is guided by belief in an external reality constituted of facts that are structured in a law-like manner (Firestone, 1987). In essence, researchers conducting inquiries within this paradigm are hoping to document laws that structure reality.

In contrast, inquiry for the interpretive paradigm carries with it the assumptions that the researcher can best come to know the reality of a situation by being there: by becoming immersed in the stream of events and activities, by becoming part of the phenomenon of study, and by documenting the understanding of the situation by those engaged in it (Firestone, 1987; Herriott and Firestone, 1983; Howe, 1985; Jacob, 1988; Smith, 1984). Jacob (1988) states, "Qualitative research has been characterized as emphasizing the importance of conducting research in a natural setting, as assuming the importance of understanding participants' perspectives, and as assuming that it is important for researchers subjectively and empathetically to know the perspectives of the participants" (p. 16). Knowledge is validated experientially (Firestone, 1987; Herriott and Firestone, 1983; Howe, 1985; Jacob, 1988; Smith, 1984). Underlying the interpretive paradigm is a very different set of epistemological assumptions from those of the empirical-analytic paradigm. Fundamental to the interpretive paradigm is the belief that knowledge comes from human experience, which is inherently continuous and nonlogical, and which may be symbolically representable (Firestone, 1987; Herriott and Firestone, 1983; Howe, 1985; Jacob, 1988; Smith, 1984). Reality is constructed by those participating in it, and understanding the reality experienced by the participants guides the interpretive researcher. Truth is "a matter of socially and historically conditioned agreement" (Smith and Heshusius, 1986, p. 6). An interpretive researcher would not look for the laws governing reality because, ultimately, reality is constructed and understood differently for each individual (Taylor, 1987). For example, an interpretive researcher could not describe the "objective" culture of an academic department, but could describe the culture as seen by those participating in it. Some would argue that one can never understand the reality of others, but only be able to articulate one interpretation of it (Cziko, 1989; Dilthey, 1990; Kent, 1991).

The researcher's role in empirical-analytic inquiry can be best described as that of onlooker. Since researchers operating from an empirical-analytic paradigm adhere to the concept of a mind-reality duality, researchers simply need to look around in the environment to document objective reality. Quantitative researchers are detached to avoid personal bias infringing on the description of reality (Firestone, 1987). Empirical-analytic research presupposes an independent reality and then investigates how we are a part of that reality and how we can know that reality (Firestone, 1987; Smith, 1983a, 1983b; Smith and Heshusius, 1986). Subsequently, the researcher is a detached observer, looking at reality and attempting to understand its complexities and relationship to those doing the observation. The researcher may use a telescope, microscope, survey, or assessment instrument when viewing a selected piece of the world; such use allows the researcher to remain detached, an essential feature of empirical-analytic inquiry. What the researcher sees (i.e., data, coded interviews) is taken prima facie as an indicator of "reality."

For interpretive inquiry, the researcher becomes an actor in real situations.[9] The researcher must attend to the total situation and integrate information from all directions simultaneously--interviews, observations, and collected cultural artifacts (Denzin, 1971; Herriott and Firestone, 1983; Howe, 1988; Smith, 1984; Taylor, 1987). The relevant world is the field surrounding the individual actor/researcher (Denzin, 1971; Herriott and Firestone, 1983; Howe, 1988; Smith, 1984). Researchers engage what is being researched to understand what is taking place. They identify what they know about what they are studying to elucidate the understanding they are bringing to the situation (see, e.g., McCracken,
1988; Rabinow and Sullivan, 1987). "It is by drawing on their understanding of how they themselves see and experience the world that they can supplement and interpret the data they generate" (McCracken, 1988, p. 12). The researcher's knowledge can be used as a guide, directing the researcher to possibilities and insights into that which is being researched (McCracken, 1988). For this reason, universal law and generalizability are limited because reality is a constructed concept and a researcher's interpretation is also a constructed part of the reality observed. Reality for those being studied is different for everyone in the researcher's field of vision.

Another difference between the two paradigms is the source of the analytical categories around which data are organized. In a typical piece of empirical-analytic research, the investigator uses a theoretical framework from which to preselect a set of categories that will guide the inquiry (Firestone, 1987; Howe, 1985; Smith, 1983a, 1983b; Smith and Heshusius, 1986). The goal is to isolate and define categories precisely before the study is undertaken, and then to determine the relationships between them (Firestone, 1987; Howe, 1985; McCracken, 1988; Smith, 1983a, 1983b; Smith and Heshusius, 1986). Hypotheses are phrased in terms of these categories, and only those data pertaining to them are collected (Howe, 1985; McCracken, 1988). The life of a college or university microenvironment (i.e., academic department, student affairs office) is viewed through the lens of a limited number of categories. For example, when investigating the supervisory style of student affairs middle managers, an institutional researcher could apply categories of human development to see if these managers engage in situational supervision. At the extreme, some might argue that the reality being viewed is being actively structured by the categories employed by the researchers to investigate the phenomenon of interest.

Empirical-analytic researchers may derive their a priori categories from personal beliefs or experience, from theoretical formulation, or from their own or others' interpretive research (Heyl, 1975; McCracken, 1988). In the case of interpretive inquiry, there are, generally, no intentionally prescribed categories to constrain the researcher (Denzin, 1971; Eisner, 1981; Howe, 1988; Shulman, 1981; Smith, 1983a, 1983b). Instead, the interpretive researcher attempts to identify emergent themes within an understanding of the respondent's viewpoint of the context (Denzin, 1971; Eisner, 1981; Shulman, 1981; Smith, 1983a, 1983b). Features are noticed and identified through an interpretive, iterative process whereby data and categories emerge simultaneously with successive experience (McCracken, 1988). The process represents an experiential exploration and is particularly suited to early inquiry into new research territory (Denzin, 1971; Firestone, 1987; Smith and Heshusius, 1986). Interpretive inquiry is useful for generating tentative categories grounded in the concrete circumstance of a particular situation. Such emergent categories may subsequently be used as the a priori categories guiding the more deductive, hypothesis-testing empirical-analytic approach.
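To make this contrast concrete, the sketch below is a deliberately small, hypothetical illustration in Python (not drawn from the article or any real study; the observation notes, keywords, and category labels are all invented). It shows how an empirical-analytic pass records only what its a priori category scheme admits, while a naive first interpretive pass lets candidate themes surface from the material itself.

```python
from collections import Counter

# Hypothetical transcript fragments from classroom observation notes.
notes = [
    "faculty lectures while students take notes",
    "students question each other about the assigned text",
    "faculty probes a student interpretation of the text",
    "students debate two readings of the text",
]

# Empirical-analytic pass: count ONLY preselected (a priori) categories;
# anything outside the category scheme is never recorded as data.
a_priori = {"lecture": "faculty-centered", "question": "substantive",
            "probe": "probing-monitoring", "debate": "substantive"}
coded = Counter()
for note in notes:
    for keyword, category in a_priori.items():
        if keyword in note:  # naive keyword matching stands in for a trained coder
            coded[category] += 1
print("A priori category counts:", dict(coded))

# Interpretive first pass: no preset scheme; surface recurring words as
# candidate "emergent" themes to be refined iteratively by the researcher.
stopwords = {"the", "a", "of", "to", "about", "while", "two", "each", "other"}
words = Counter(w for note in notes for w in note.split() if w not in stopwords)
print("Candidate emergent themes:", words.most_common(5))
```

The asymmetry is the point: the a priori pass can never register anything outside its scheme, whereas the emergent pass generates categories from the data, at the price of relying on the researcher's judgment, applied iteratively, to turn raw recurrences into themes.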
A caveat must be noted to the process just described. Some may argue that the idea of viewing a situation or phenomenon for "emergent" themes is unattainable. Phenomenologists and hermeneuticists might disagree with the description just provided. For them, a situation or occurrence cannot be comprehended without one's own knowledge about the situation. Everyone has some idea of the phenomenon at which they are looking and these ideas shape what is being seen. In other words, the "emergent" themes that are being observed may be seen because of the particular knowledge or ideas possessed by the researcher before the start of the research. Phenomenologists believe that the researcher's preknowledge can be identified and bracketed out when viewing a phenomenon (McCracken, 1988). In contrast, a hermeneuticist would disagree and argue that one can never remove one's own preknowledge from the investigation (Kvale, 1983; Packer and Addison, 1989). One cannot understand the situation without preknowledge, because preknowledge assists in understanding what is being seen.

A further difference is the aim of inquiry. The aim of inquiry for the empirical-analytic paradigm is to generalize from the particular to construct a set of theoretical statements that are universally applicable (Donmoyer, 1985; Firestone, 1987; Garrison, 1986; Howe, 1988; McCracken, 1988; Smith, 1983a, 1983b; Smith and Heshusius, 1986). The institutional research done in the empirical-analytic paradigm aims to develop understanding of classes of higher education phenomena, rather than to focus on particular instances in particular settings. Interpretive inquiry, however, is directed toward the unique situation or what Lewin (1951) calls a focus on the whole and the individual's present situation. The aim of interpretive inquiry is to describe in detail a specific situation or phenomenon under study. The situationally relevant products of qualitative inquiry serve both practical and theoretical purposes (Jacob, 1988; McCracken, 1988). They can provide guides for action in the immediate situation and ideas for developing hypotheses to guide quantitative inquiry (Miles and Huberman, 1984).[10]

Epistemological Differences

The different paradigms are also associated with different types of knowledge. The aim of situation relevancy pursued in interpretive research is served by knowledge of the particular phenomenon (i.e., college or university, academic department, etc.) under study (McCracken, 1988; Mishler, 1986). The aim of generalizability sought by empirical-analytic research is served by the development of universal knowledge. Interpretive inquiry focuses on the particular: the knowledge that is infused with human organization and human interest, as represented by the situation under study (Bernstein, 1976, 1983; McCracken, 1988; Mishler, 1986). For the interpretive paradigm, knowledge is knowledge only as understood within the social context in which it takes place (Guba, 1987; Guba and Lincoln, 1981; McCracken, 1988; Mishler, 1986; Smith, 1983a, 1983b; Smith and Heshusius, 1986). The meaning of a particular utterance or interaction can be understood and has meaning only within the specific context in which it occurred (McCracken, 1988; Mishler, 1986). In the extreme, generalizability within the empirical-analytic inquiry implies a dissociation of universal knowledge from human interest (Habermas, 1971). And, at the other extreme, qualitative inquiry implies a preoccupation with the idiosyncratic.[11]

Knowledge for both paradigms is further differentiated by what researchers consider to be data and the level at which they consider issues of meaning. In interpretive inquiry, the aim of understanding a particular situation requires that researchers make direct experiential contact with the phenomena under study (e.g., classroom, academic department, etc.). Understanding the events, activities, and utterances in a specific situation requires a complex appreciation of the overall context in which the phenomenon occurs (McCracken, 1988; Mishler, 1986). Context refers to the complete fabric of local culture, people, resources, purposes, earlier events, and future expectations that constitute the time-and-space background of the immediate and particular situation (Denzin, 1971; Guba, 1987; Guba and Lincoln, 1981; McCracken, 1988; Mishler, 1986; Smith, 1984). Facts have no meaning in isolation from the setting (Herriott and Firestone, 1983; McCracken, 1988; Mishler, 1986). Meaning is developed from the point of view of the participant (Firestone, 1987; McCracken, 1988; Mishler, 1986; Smith, 1983a, 1983b). Interpretive research yields knowledge that is connected to the participant's definition or perspective of the situation, what Rogers (1951) has termed the "phenomenal field" of the person. Researchers involve themselves directly in the setting under study in order to appreciate organizational phenomena in light of the context in which they occur and from the participants' point of view.

In empirical-analytic inquiry, the aim of developing universal principles of institutional life necessitates stripping away the idiosyncrasies of the particular phenomenon studied to reveal what is generally applicable to all similar situations (Firestone, 1987; Garrison, 1986; Howe, 1985; Smith, 1983a, 1983b; Smith and Heshusius, 1986; Soltis, 1984). The separation of the universal from the particular is accomplished through several processes. With the aid of sampling, aggregation, and other analytic techniques, the uniqueness of individual academic departments or classrooms is randomized, controlled for, and otherwise "averaged," revealing the core of presumed common truths. The validity of such efforts relies on the comparability of measurements across observations, settings, and times, as well as the completeness with which the observational procedures and situations are documented. Hence, the concern with instrumentation, specification, precision, and adherence to methodological assumptions (i.e., sampling is random, variables are normally distributed).
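As a toy illustration of this "averaging," the following Python sketch (all department names and scores are invented, and the plain mean and standard deviation stand in for whatever analytic techniques an actual study would specify) collapses per-department detail into a single campus-level estimate.

```python
from statistics import mean, stdev

# Hypothetical coded scores (e.g., an "interaction quality" rating) gathered
# from observations in three academic departments.  All numbers are invented.
departments = {
    "chemistry":  [3.2, 3.8, 3.5, 3.9],
    "psychology": [4.1, 3.7, 4.4, 4.0],
    "english":    [3.6, 3.3, 3.9, 3.4],
}

# Aggregation "averages away" the uniqueness of any one department:
# local detail is traded for a single campus-level estimate.
dept_means = {d: round(mean(scores), 2) for d, scores in departments.items()}
all_scores = [s for scores in departments.values() for s in scores]

print("Department means:", dept_means)
print("Campus-level mean:", round(mean(all_scores), 2),
      "+/-", round(stdev(all_scores), 2))
```

Whatever made one department's ratings run high or low disappears into the campus-level figure; that loss of local detail is precisely the trade the empirical-analytic paradigm makes in exchange for comparability and generality.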
Empirical-analytic research is designed to be detached from, and independent of, a specific situation under study in a particular organization, academic department, or classroom. The researcher determines the frequencies of, and associations among, events with respect to a set of hypothesized categories and relationships (Firestone, 1987; Garrison, 1986; Howe, 1985; Smith, 1983a, 1983b; Smith and Heshusius, 1986; Soltis, 1984). Meaning is assigned to events on the basis of a priori analytic categories and explicit researcher-free procedures. The spectrum of a phenomenon is filtered through the researcher's preset categories; elements related to the categories are selected, coded as data, and simultaneously given meaning by the categories (Firestone, 1987; Garrison, 1986; Howe, 1985; Smith, 1983a, 1983b; Smith and Heshusius, 1986; Soltis, 1984). As a result, data are considered factual when they have the same meaning across situations and settings. That is, they are context-free.

AN ACADEMIC PROGRAM REVIEW EXAMPLE

To illustrate how the underlying philosophical grounds of the two paradigms shape an approach to an institutional research question, a hypothetical example of an academic program review is presented. The example developed focuses on an English department's interest in whether it has successfully implemented its new focus on critical thinking skills and what impact the focus has on various outcomes of interest, including faculty workload. The assumption behind the design is the belief that interactive and collaborative class discussion will facilitate critical thinking skills more so than the normal faculty lecture format. By engaging with their classmates over course-assigned texts, the department hopes that the students will reflect on their own perspectives and interpretations, but also be challenged to better articulate what they believe the texts to be saying. In addition, the department hopes that the increase in skills of articulation will translate into better writing and better academic performance in other writing-based classes as well as better job placement upon graduation.

Table 2 highlights some of the major differences between an empirical-analytic and an interpretive approach to this study. Comparing the aims of the study, one notices that empirical-analytic institutional researchers would look to document the implementation of classroom discussions. They would hypothesize prior to the study what they think would occur, for example, what "types" of interactions they would see. In this case, they would hypothesize that discussions would facilitate the writing skills of those students participating in the study and compare them to a group of students in a control group who were exposed to the traditional lecture format. They would want to describe changes that occur, whether they be the presence or absence of what they expected to occur. In contrast, interpretive researchers' intention would be to explain the content and the processes of the discussions occurring in the classroom with the discussion intervention. They want to document the understanding of the intervention from the participants' viewpoints and explicate any unpredicted and emergent themes.

The aims of the study are structured by the underlying
assumptions guiding the paradigms. On the interpretive side, the assumption that reality is constructed directs the researchers to attempt to document how the participants understand and experience the critical thinking focus of the department and how faculty view the impact on their workload. In contrast, the empirical-analytic researchers' assumption that there is a "true" reality would direct them to determine patterns of relationships among numerous variables (race, gender, previous English classes, as well as course grades, postgraduation job placement and performance) and critical thinking and generate laws explaining how the critical thinking is having an impact (e.g., documenting any increase or decrease in faculty workload hours). The interpretive researchers would resist generating explanations about how students are experiencing the component, arguing that each student and faculty member constructs a different understanding and, therefore, generalized explanations are not possible.

TABLE 2. Hypothetical Academic Program Review

Stated aims of the study
  Empirical-Analytic (quantitative): To document implementation of critical thinking component. To chronicle (presence or absence of predicted) changes. Summative evaluation (decision making and accountability).
  Interpretive (qualitative): Explanation of participants and processes of discussion. Document understanding of goals from participant perspectives. To articulate unpredicted, emergent themes. Formative evaluation (program improvement).

Design details
  Empirical-Analytic (quantitative): Observation of classes for 3 months. Interviews with 6 faculty. Surveys of faculty, students, and graduates. 3 focus group discussions.
  Interpretive (qualitative): Observation, interviews, field notes in 3 classes--selected to contrast for 12 months. Interviews with 6 faculty, 6 students, 6 graduates. 3 focus group discussions.

Material obtained
  Empirical-Analytic (quantitative): Observation notes of conversation "gist." Coded for "indicators" of goal attainment (a priori categories). Multiple-choice survey questions (a priori categories). Course grades.
  Interpretive (qualitative): Transcripts of focus group discussions and interviews.

Form of the analysis
  Empirical-Analytic (quantitative): Statistical assessment of change over time in coded observations, collapsed over interview and group discussions. Statistical comparison of survey responses. Interviews taken at face value as statement of beliefs, attitudes, perceptions. Documentation of faculty workload hours.
  Interpretive (qualitative): Articulation of goals, aims, feelings. Unconscious, preexisting assumptions. Rule enforcement--encouragement, modeling of goal-directed behavior. Creation of social contexts that encourage goal-directed behavior, safe atmosphere, sense of purpose. Discussion of how participants "understand" the goals.

Findings
  Empirical-Analytic (quantitative): No observed differences among faculty, staff, and students. No significant difference in course grades between critical thinking and control groups. Increase in faculty workload hours.
  Interpretive (qualitative): Goal-directed behavior is enacted differently by different faculty, staff, and students, due to preexisting assumptions about department, and different focus of attention. Increase in faculty/student interactions outside of class. Change in type of preparation and feedback given to students, thereby altering faculty perceptions of workload.

Norms
  Empirical-Analytic (quantitative): "Indicators" of successful goal-directed behavior are treated as factual. Normative findings: decrease in faculty utterances in class discussions, increase in participant-participant interactions, number of participants speaking, increase in categories such as "substantive," "probing-monitoring," "management," etc.
  Interpretive (qualitative): Empirical evidence sought of goal-directed behavior. Normative findings: interpretive authority based on persuasive justification (use of evidence and explanation; questioning), and a sense that text is open to different readings, vs. faculty as authority and guide, with the sense that there is a single text meaning.

The two paradigms also have different design details. The goal of interpretive research is to get as close to describing the participants' understanding as possible. Observing three classes and interviewing a select few students, faculty, and graduates provides researchers with an opportunity to analyze transcripts in detail. Subsequently, they can compare transcript
analysis (1) to see if the intervention was implemented the same way by faculty, (2) to document how the students and faculty understood the intervention, and (3) to document participant understanding of the impact of the critical thinking focus. In contrast, empirical-analytic researchers may observe five classes and code observation by a priori "indicators" of interactive/collaborative learning (i.e., coding certain types of interactions and comparing between classes). By approaching classroom observations in such a manner (i.e., a priori indicators), these qualitative methods (interviews) are being done from within an empirical-analytic framework. This highlights the point that just doing an interview does not necessarily indicate one is engaged in qualitative (interpretive paradigm) research. It is the assumptions being made about methodology, ontology, and epistemology that determine whether the interview is truly qualitative. Faculty interviews would be coded the same way as well.

One of the major differences between the two approaches is the implementation of an assessment instrument. Within the empirical-analytic paradigm, the assessment instrument items would be constructed beforehand from the researchers' ideas of what critical thinking is and what they think should be important outcomes of the new component. For this example, the assessment would be a pretest and posttest comparing critical thinking skills between the critical thinking group and control group. Following the administration of the tests, the researchers would compare scores between the "critical thinking" group and the control group to document any statistically significant differences between the two groups on critical thinking.

As one can see from Table 2, the type of material obtained also differs. As mentioned previously, the interpretive approach would yield transcripts of class discussions and faculty interviews. For the empirical-analytic approach, we would get observation notes coded for indicators of interactive/collaborative learning in addition to the survey and assessment instrument information.

The form of analysis is one main area of difference between the two approaches. Empirical-analytic researchers would perform statistical analyses comparing the assessment and survey scores of the control group with those of the critical thinking group. The faculty interviews would be taken at face value as a statement of the faculty members' beliefs, attitudes, and perceptions. In addition, a mean for specific codes could be calculated and correlations among other variables of interest could be attained, therefore indicating that the researchers were operating within the empirical-analytic paradigm. In contrast, interpretive researchers would analyze the classroom discussion tapes and attempt to articulate the goals, aims, and feelings of the participants. Interpretive researchers would look to identify preexisting assumptions on the part of the participants and document where they see adherence to the critical thinking component guidelines and where they see instances of interactive/collaborative discussion. An interpretive analysis would attempt to identify how the students and faculty understood the class discussion and the critical thinking emphasis.
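To make the empirical-analytic side of this analysis concrete, consider a hedged sketch: the Python fragment below compares invented posttest scores for a hypothetical discussion group and control group using a plain two-sample t test, one simple stand-in for the statistical comparisons described above.

```python
from scipy import stats

# Invented posttest critical-thinking scores for the hypothetical review.
discussion_group = [74, 78, 69, 81, 72, 77, 70, 75]  # discussion-based sections
control_group    = [73, 76, 71, 79, 70, 78, 72, 74]  # lecture-based sections

# A two-sample t test asks whether the group means differ more than
# chance sampling variation would explain.
t_stat, p_value = stats.ttest_ind(discussion_group, control_group)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    # The "no significant difference" finding discussed in the text.
    print("No statistically significant difference between groups.")
else:
    print("Groups differ at the .05 level.")
```

Note what the test cannot say: it reports only whether the group means differ beyond chance sampling variation, not why they do or do not, which is exactly the gap the interpretive analysis is meant to fill.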
For this hypothetical program review, let us say that there were no differences between the classes who engaged in the discussion experience and those who did not. In the empirical-analytic paradigm, this result would be indicated by no significant difference between the two groups on the assessment instrument. In addition, student grades in other writing-based courses were not significantly different between the two groups and graduates. The interpretive researcher would note that the intervention was implemented differently by different faculty due to preexisting assumptions about the text and the intervention, and the focus within the intervention of each faculty member. The empirical-analytic researcher would conclude that the critical thinking focus was not significantly improving upon what was already being done in the department, whereas the qualitative researcher would conclude the focus was not implemented the way it was intended or that it was implemented differently depending on the faculty member involved and student perception of the department.

Finally, the norms adhered to are different between the two paradigms. For empirical-analytic researchers, the indicators for successful impact of the critical thinking focus are treated as factual. The evidence sought consists of indicators, whereas interpretive researchers would look for empirical evidence (observations) of critical reflection and how critical reflection is "understood" by departmental participants. For this example, let us say that there were changes in the discussion dynamics over the course of the observation period. Empirical-analytic researchers would see a decrease in faculty comments over the three months of observations with a concomitant increase in student-student interactions. There is also a corresponding increase in different coding categories. On the other hand, interpretive researchers would notice that those assuming the authority during the discussion are the ones able to build more persuasive arguments for their text interpretations. Interpretive researchers would note that the intervention was implemented in slightly different ways, with one group assuming responsibility and believing the texts to be open to multiple interpretations. Other groups would believe the text has one meaning and would then rely on the faculty to guide them to that meaning.

Overall, therefore, empirical-analytic researchers may conclude that the critical thinking focus was not successful as indicated by the nonsignificant findings. In addition, they would see the increase in faculty workload and might conclude that the time invested in faculty is not translated into the hoped-for positive outcomes. In contrast, interpretive researchers would indicate that they found that different faculty articulated different understandings of the critical thinking focus and, therefore, implemented it differently. In addition, interpretive researchers would articulate faculty perceptions of how their workload hours changed in terms of quality: that they were not just spending more time, but that time entailed more intense preparation for class to ensure critical engagement during discussion, as well as more attention to the type of feedback given to students to facilitate their critical thinking skills.

IMPLICATIONS FOR INSTITUTIONAL RESEARCH

As in everyday life, institutional researchers need both modes of inquiry, both ways of knowing, and both kinds of knowledge to advance understanding of our specific college or university. Most social scientists and educational researchers have typically advocated the use of one or the other mode of inquiry. In contrast, institutional researchers tend to rely on empirical-analytic research more regularly (Peterson, 1985a, 1985b). The reasons for the preference for empirical-analytic research are elusive. Perhaps it stems from an artifact left over from the social sciences, or perhaps the interpretive paradigm has not yet been seen as a viable and useful tool for understanding colleges and universities. This artifact entails the drive for institutional researchers to have their work
viewed as based on "true science." Despite the success and usefulness of empirical-analytic research in institutional research, its limitations for the social sciences--and institutional research--have become increasingly apparent and of concern recently (Donmoyer, 1985; Eisner, 1981, 1983; Firestone, 1987; Garrison, 1986; Giarelli and Chambliss, 1988; Howe, 1985, 1988; Hutchinson, 1988; Lincoln and Guba, 1985; Marshall, Lincoln, and Austin, 1991; Peterson and Spencer, 1993; Sherman and Webb, 1988). Empirical-analytic research systematically overlooks critical features that often render the results epistemologically limited (Guba, 1987; Guba and Lincoln, 1981). Such features include the definition of human action in specific settings, the actor's particular definition of his/her situation, the human interest of the actor, and the historical context of the situation. These issues are exemplified by the program review example described previously, particularly in reference to how the faculty and students understood the critical thinking focus. Each faculty member had different perceptions of the component, perceptions possibly influenced by institutional culture and climate. The empirical-analytic approach neglects this information. These shortcomings can be overcome by qualitative research techniques.

Interpretive research, however, may appear replete with subjectivism and be viewed by university administrators as having questionable precision, rigor, or credibility. It may be easier for an administrator to make a decision based on findings from a large sample rather than trust a description of five case studies or five in-depth interviews. University administrators need to make decisions on what they think is a "typical" or "average" case that holds true across various university environments or in particular departments. One cannot fault an administrator for being uncomfortable basing a policy decision on five or six well-described cases when an empirical-analytic approach (with accompanying large database) might provide a better opportunity to generalize. However, these shortcomings can be overcome by empirical-analytic research.

Institutional research is currently characterized by two broad approaches. One is based on the assumption that there exists a true reality, whereas the other is based on the assumption that there is no true reality but a reality that is constructed by shared understandings of participants. Both are methodologically precise. One utilizes techniques that produce results generalizable across contexts, but neglects the reality of institutions; and the other provides the researcher with in-depth knowledge that often is not generalizable. Although educational researchers and social scientists have debated the merits of combining the approaches, for institutional researchers, using both approaches can only strengthen the rigor from which they approach their assigned tasks. However, as we have seen, the choice embodies not a simple decision between methodologies, but an understanding of the philosophical assumptions concerning reality, the role of the researcher, what is knowledge, and what are data. By using both approaches, institutional researchers can strengthen the results of their endeavors. Institutional researchers need to identify and refer to exemplars of good research--research that is both methodologically precise and grounded in understanding of the philosophical assumptions undergirding both approaches.

Empirical-Analytic and Interpretive Research Used Together

Institutional research studies require that both approaches be simultaneously pursued, either by different researchers or by a single researcher. Of course, it must be acknowledged that some questions are more amenable to being investigated by one approach, but using both enhances institutional researchers' ability
    to understand "whatis going on." Each mode offers distinctive advantages, suggesting circumstances (type of problem, state of knowledge, unit o f analysis, researchers' role and purpose) in which one may be more appropriate. Qualita- tive research is more useful for exploring institutional phenomena, articulating participants' understandings and perceptions, and generating tentative concepts and theories that directly pertain to particular environments (e.g., academic de- partments, specific residence halls). By yielding in-depth knowledge o f particu- lar situations, it also more directly serves practitioners' and administrators' needs. The policies and/or decisions based on this type o f interpretive informa- tion may be more directly suited to the specifics o f the milieu from which it was derived. Quantitative research is suited to theory testing and developing universal statements. It also provides a "general" picture o f an institutional situation or academic department climate. The choice of approach will depend on a number o f factors. First, the choice will depend on the institutional researcher's personal training, cognitive style, and preference. Second, the choice will no doubt depend on those being re- searched. For example, some individuals may be particularly uncomfortable with the idea of being interviewed and others may not like being filmed so that
  • 206.
    their interactions withstudents can be analyzed. Third, the choice could also depend on the intended audience for the research. Some audiences may want a concise summary of findings more easily produced by empirical-analytic in- quiry than in-depth articulations of subjects' realities. Fourth, the choice may depend on time and m o n e y - - i s s u e s often on the minds o f institutional re- 556 HATHAWAY searchers. Often, people assume that qualitative research involves a larger in- vestment of time. In reality, the time needs for both quantitative and qualitative research may be close to equal, just distributed differently. For quantitative research, much of the time is spent developing surveys, distributing them, com- piling the data, analyzing the data, and presenting the results. Interpretation o f the results is a relatively small portion of the overall time spent on a quantita- tive study. On the other hand, the majority of time in qualitative research is spent on interpretation, analyzing pages o f transcripts, viewing videotapes over and over, while the time spent on the collection of data is a relatively small portion of the overall time commitment. Finally, one cannot ignore the history of the institutional research office, what is the preferred
  • 207.
    research approach, and whatthose using the office prefer in terms of research. One cannot suddenly switch from quantitative to qualitative methods without checking the political ramifications o f doing so. In contrasting the two research approaches, the attempt has been to discuss the limitations associated with different ways of knowing. In light of these limitations, to continue the exclusive use of one approach that has characterized institutional research will produce limited results--that is, results that are meth- odologically rigorous but at times inappropriate. Institutional researchers' abili- ties to grasp the breadth, depth, and richness of college and university life are hampered by allegiance to a single mode of inquiry. Institutional researcher efforts to develop comprehensive pictures of college and university phenomena are handicapped when only one (either quantitative or qualitative) approach is advocated and practiced. We can survey regarding the benefits o f a new depart- mental focus and find that the new approach is not increasing student perfor- mance on particular skills. We could find out that there is no improvement, but we will not know exactly why there is no improvement. A survey could point an institutional researcher to the problem, and in-depth interviews with some students could provide the information necessary to begin to
    explain the "no improvement"finding. Institutional researchers can alternate between the two approaches. Peterson (1985a) advocates alternating between quantitative and qualitative research, using findings generated from one approach to generate research questions for the other. In the previous example, there was no significant improvement in skills following a new departmental focus. Using this information, an institu- tional researcher could interview students to see how they experienced the new focus. Through these interviews a researcher could identify common themes (e.g., students feel positive about the focus; however, it is not being imple- mented consistently) that could be used to generate questionnaire items for additional surveys. By alternating between the two modes, an institutional re- searcher could get a more accurate picture of the new departmental focus that may not have been possible using only one approach. INSTITUTIONAL RESEARCH 557 C O N C L U S I O N A m a j o r r e a s o n w h y r e s e a r c h m e t h o d o l o g y in i n s t i t u t i o n a l r e s e a r c h is such an e x c i t i n g a r e a is that i n s t i t u t i o n a l r e s e a r c h
  • 209.
    is not it s e l f a d i s c i p l i n e . I n d e e d , it is hard to d e s c r i b e i n s t i t u t i o n a l research. H o w e v e r , it is a f i e l d c o n t a i n i n g p h e - nomena, events, institutions, p r o b l e m s , p e r s o n s , and p r o c e s s e s , w h i c h t h e m - s e l v e s c o n s t i t u t e the raw m a t e r i a l for i n q u i r i e s o f m a n y kinds. M a n y o f t h e s e inquiries p r o v i d e the f o u n d a t i o n f r o m w h i c h to d e v e l o p p o l i c i e s and institu- tional i n t e r v e n t i o n s . Due to the c o m p l e x i t y o f i n s t i t u t i o n a l r e s e a r c h , the c h o i c e o f r e s e a r c h ap- proach to a q u e s t i o n s h o u l d not be taken lightly. Each a p p r o a c h to an institu- tional r e s e a r c h p r o b l e m or q u e s t i o n b r i n g s its o w n u n i q u e p e r s p e c t i v e . E a c h sheds its o w n d i s t i n c t i v e light on the s i t u a t i o n s and p r o b l e m s i n s t i t u t i o n a l re- s e a r c h e r s s e e k to u n d e r s t a n d ( P e t e r s o n and S p e n c e r , 1993). T h e issue is not c h o o s i n g a q u a l i t a t i v e o r n o n q u a l i t a t i v e a p p r o a c h , but it is d e c i d i n g how an institutional r e s e a r c h e r a p p r o a c h e s the w o r l d . C h o o s i n g an a p p r o a c h is not a d e c i s i o n b e t w e e n m e t h o d s ; each c h o i c e is r e p l e t e w i t h u n d e r l y i n g a s s u m p t i o n s about reality. R e s e a r c h m e t h o d s are not m e r e l y d i f f e r e n t w a y s o f a c h i e v i n g the s a m e end. T h e y c a r r y w i t h t h e m d i f f e r e n t w a y s o f a s k i n g q u e s t i o n s and often different c o m m i t m e n t s to e d u c a t i o n a l and s o c i a l i d e o l o g i e s . T h e a t t e m p t here has b e e n to c l a r i f y the d i s t i n c t i o n b e t w e e n the two a p p r o a c h e s so that the two a p p r o a c h e s are v i e w e d as m o r e than s i m p l y a l
  • 210.
    t e rn a t i v e m e t h o d s . A s i n s t i t u t i o n a l r e s e a r c h e r s e m p l o y t h e i r crafts, they m a k e a m u l t i t u d e o f d e c i s i o n s c o n c e r n i n g r e s e a r c h m e t h o d s . T h e s e d e c i s i o n s h a v e a d i r e c t i m p a c t on how they m a k e m e a n i n g and h o w r e a l i t y is s t r u c t u r e d and u n d e r s t o o d b y insti- tutional r e s e a r c h e r s and their c o n s t i t u e n c i e s . In s o m e w a y s , the c h o i c e o f q u a n - titative and q u a l i t a t i v e a p p r o a c h e s c r e a t e s the r e a l i t y w e are a t t e m p t i n g to d i s - cover. By m a k i n g a c h o i c e b e t w e e n q u a n t i t a t i v e o r q u a l i t a t i v e inquiry, " t o a s i g n i f i c a n t extent, w e c h o o s e our w o r l d v i e w " ( A l l e n d e r , 1986, p. 188). F o r institutional r e s e a r c h e r s , it is not j u s t a c h o i c e b e t w e e n " d o i n g i n t e r v i e w s " or " c o n d u c t i n g a s u r v e y " ; it is a c h o i c e b e t w e e n a s s u m p t i o n s a b o u t the w o r l d . NOTES 1. Definitions of some terms are in order here. Commensurability refers to the ability to compare philosophical underpinnings without a neutral frame of reference. Epistemology is the investi- gation or study of the origin, structure, methods, and validity of knowledge (Runes, 1983). Ontology is a theory as to what exists (Urmson and Ree, 1989) or the assumptions about existence underlying any conceptual scheme, theory, or system of ideas (Flew, 1984). Phenom- enology is the study of how the world is experienced from the actor's/subject's own frame of reference (Patton, 1980). Hermeneutics is the art and science of
interpreting the meaning of texts which stresses how prior understandings and prejudices shape the interpretive process (Runes, 1983). Dialectic refers to a process through which what is known emerges within an interaction between the knower and what is to be known.

2. Please see Guba and Lincoln (1988) for an in-depth treatment of the distinction between method and methodology.

3. Moss (1990) provides a brief and useful discussion about the distinction among the terms incompatible, incommensurable, and incomparable. The reader is directed to Moss's comments as well as to Bernstein's (1983) in-depth discussion concerning the definitions of these terms.

4. With the advent and development of critical theory and postmodernism, some (Lather, 1991a, 1991b) would argue that we are currently immersed in a crisis over what it means to do research. The reader is directed to Darder (1991), Giroux (1988), and Gore (1993) for descriptions of critical theory, and Bauman (1992), Giroux (1991), and Rosenau (1992) for descriptions of postmodernism.

5. For this paper, the distinction between the quantitative and qualitative paradigms is being used as a heuristic device. One must note, however, that this distinction may oversimplify the various philosophical differences even within the two paradigms.

6. To distinguish the different interpretive approaches is beyond the purview of this paper. The reader is directed to Denzin and Lincoln's (1994) Handbook of Qualitative Research and Lancy (1993) for in-depth explorations of the distinctions.

7. Firestone (1987) argues that the two paradigms can also be distinguished by differing rhetoric. In essence, quantitative and qualitative methods "lend themselves to different kinds of rhetoric" (p. 16). Subsequently, each method type uses different presentation techniques and means of persuasion to express assumptions about methodology, ontology, and epistemology and to convince readers about conclusions.

8. For the sake of this paper, methodology, ontology, and epistemology have been separated for convenience and clarity. Generally, however, these concepts are so intertwined that discussing one almost necessitates discussing one or both of the others. For example, discussing what each paradigm believes to be "reality" (ontology) almost dictates what can be known (epistemology) about that reality and how that reality can be measured (methodology).

9. The degree of engagement varies depending on the qualitative approach being used. For example, nonparticipant observers are not actively involved in the situation whereas participant observers attempt to assume a role to understand the reality as constructed and comprehended by those in the situation.
10. Smith and Heshusius (1986) raise a common concern among many educational researchers that qualitative research should not be thought of as just a procedural variation of quantitative research. The reader should note this caveat when entertaining the notion of using qualitative data collection and analysis as an avenue from which to generate categories for quantitative research.

11. It would be misleading to imply that qualitative research is not concerned with generalizability. Firestone (1993) highlights the various arguments for, and the types of generalizability within, qualitative research, and Kirk and Miller (1986) discuss reliability and validity in qualitative research.

REFERENCES

Allender, J. S. (1986). Educational research: A personal and social process. Review of Educational Research 56(2): 173-193.
Banks, J. A. (1988). Multiethnic Education, 2nd ed. Boston: Allyn & Bacon.
Bauman, Z. (1992). Intimations of Postmodernity. New York: Routledge.
Bernstein, R. J. (1976). The Restructuring of Social and Political Theory. Philadelphia: The University of Pennsylvania Press.
Bernstein, R. J. (1983). Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis. Philadelphia: The University of Pennsylvania Press.
Bohannon, T. R. (1988). Applying regression analysis to problems in institutional research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 43-60. San Francisco: Jossey-Bass, Publishers.
Bunda, M. A. (1991). Capturing the richness of student outcomes with qualitative techniques. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 35-47. San Francisco: Jossey-Bass, Publishers.
Cziko, G. A. (1989). Unpredictability and indeterminism in human behavior: Arguments and implications for educational research. Educational Researcher 18(3): 17-25.
Darder, A. (1991). Culture and Power in the Classroom: A Critical Foundation for Bicultural Education. New York: Bergin & Garvey.
Denzin, N. K. (1971). The logic of naturalistic inquiry. Social Forces 50: 166-182.
Denzin, N. K., and Y. S. Lincoln (1994) (eds.). Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications, Inc.
Dewey, J. (1933). How We Think. Massachusetts: D. C. Heath.
Dilthey, W. (1990). The rise of hermeneutics. In G. L. Ormiston and A. D. Schrift (eds.), The Hermeneutic Tradition: From Ast to Ricoeur, pp. 101-114. Albany: State University of New York Press.
Donmoyer, R. (1985). The rescue from relativism: Two failed attempts and an alternative strategy. Educational Researcher 14(10): 13-20.
Eisner, E. W. (1981). On the differences between scientific and artistic approaches to qualitative research. Educational Researcher 10(4): 5-9.
Eisner, E. W. (1983). Anastasia might still be alive, but the monarchy is dead. Educational Researcher 12(5): 13-14, 23-24.
Fetterman, D. M. (1991). Qualitative resource landmarks. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 81-84. San Francisco: Jossey-Bass, Publishers.
Fincher, C. (1985). The art and science of institutional research. In M. W. Peterson and M. Corcoran (eds.), Institutional Research in Transition, pp. 17-37. New Directions for Institutional Research. San Francisco: Jossey-Bass Inc., Publishers.
Firestone, W. A. (1987). Meaning in method: The rhetoric of quantitative and qualitative research. Educational Researcher 16(7): 16-21.
Firestone, W. A. (1993). Alternative arguments for generalizing from data as applied to qualitative research. Educational Researcher 22(4): 16-23.
Flew, A. (1984). A Dictionary of Philosophy. London: The Macmillan Press Ltd.
Garrison, J. W. (1986). Some principles of postpositivistic philosophy of science. Educational Researcher 15(9): 12-18.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Giarelli, J. M., and Chambliss, J. J. (1988). Philosophy of education as qualitative inquiry. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 30-43. New York: The Falmer Press.
Giroux, H. A. (1988). Schooling and the Struggle for Public Life: Critical Pedagogy in the Modern Age. Minneapolis, MN: University of Minnesota Press.
Giroux, H. A. (1991) (ed.). Postmodernism, Feminism, and Cultural Politics. Albany, NY: State University of New York Press.
Gordon, E. W., F. Miller, and D. Rollock (1990). Coping with communicentric bias in knowledge production in the social sciences. Educational Researcher 19(3): 14-19.
Gore, J. M. (1993). The Struggle for Pedagogies: Critical and Feminist Discourses as Regimes of Truth. New York: Routledge.
Guba, E. (1987). What have we learned about naturalistic evaluation? Evaluation Practice 8(1): 23-43.
Guba, E., and Y. Lincoln (1981). Effective Evaluation. San Francisco: Jossey-Bass.
Guba, E. G., and Y. S. Lincoln (1988). Do inquiry paradigms imply inquiry methodologies? In D. M. Fetterman (ed.), Qualitative Approaches to Evaluation in Education: The Silent Scientific Revolution, pp. 89-115. New York: Praeger Publishers.
Gubrium, J. (1988). Analyzing Field Reality. Newbury Park, CA: Sage.
Habermas, J. (1971). Knowledge and Human Interest. Boston, MA: Beacon.
Habermas, J. (1988). On the Logic of the Social Sciences (S. W. Nicholsen and J. A. Stark, trans.). Cambridge, MA: The MIT Press.
Hall, E. T. (1976). Beyond Culture. New York: Doubleday.
Herriott, R. E., and W. A. Firestone (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher 12(2): 14-19.
Heyl, J. D. (1975). Paradigms in social science. Society 12(5): 61-67.
Hinkle, D. E., G. W. McLaughlin, and J. T. Austin (1988). Using log-linear models in higher education research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 23-42. San Francisco: Jossey-Bass, Publishers.
Howe, K. R. (1985). Two dogmas of educational research. Educational Researcher 14(8): 10-18.
Howe, K. R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher 17(8): 10-16.
Hutchinson, S. A. (1988). Education and grounded theory. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 123-140. New York: The Falmer Press.
Jacob, E. (1988). Clarifying qualitative research: A focus on traditions. Educational Researcher 17(1): 16-24.
James, W. (1918). The Principles of Psychology. New York: Dover.
Jennings, L. W., and D. M. Young (1988). Forecasting methods for institutional research. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 77-96. San Francisco: Jossey-Bass, Publishers.
Kaplan, A. (1964). The Conduct of Inquiry. San Francisco: Chandler.
Kent, T. (1991). On the very idea of a discourse community. College Composition and Communication 42(4): 425-445.
Kirk, J., and M. L. Miller (1986). Reliability and Validity in Qualitative Research. Beverly Hills, CA: Sage Publications, Inc.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions, 2nd ed. Chicago: University of Chicago Press.
Kuhn, T. S. (1974). Second thoughts on paradigms. Reprinted in The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press.
Kvale, S. (1983). The qualitative research interview: A phenomenological and a hermeneutical mode of understanding. Journal of Phenomenological Psychology 14(2): 171-196.
Lancy, D. (1993). Qualitative Research in Education. New York: Longman.
Lather, P. (1991a). Getting Smart: Feminist Research and Pedagogy Within the Postmodern. New York: Routledge.
Lather, P. (1991b). Deconstructing/deconstructive inquiry: The politics of knowing and being known. Educational Theory 41(2): 153-173.
Lewin, K. (1951). Field Theory in Social Science. New York: Harper.
Lincoln, Y. S., and E. G. Guba (1985). Naturalistic Inquiry. Beverly Hills: Sage Publications.
Marshall, C., Y. S. Lincoln, and A. E. Austin (1991). Integrating a qualitative and quantitative assessment of the quality of academic life: Political and logistical issues. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 65-80. San Francisco: Jossey-Bass, Publishers.
McCracken, G. (1988). The Long Interview. Newbury Park, CA: Sage Publications, Inc.
Merton, R. (1972). Insiders and outsiders: A chapter in the sociology of knowledge. In Varieties of Political Expression in Sociology. Chicago: The University of Chicago Press.
Miles, M. B., and A. M. Huberman (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher 13(5): 20-30.
Mishler, E. G. (1986). Research Interviewing: Context and Narrative. Cambridge, MA: Harvard University Press.
Moss, P. A. (1990, April). Multiple triangulation in impact assessment: Setting the context. Remarks prepared for oral presentation in P. LeMahieu (Chair), Multiple triangulation in impact assessment: The Pittsburgh discussion project experience. Symposium conducted at the annual meeting of the American Educational Research Association, Boston, Massachusetts.
Packer, M. J., and R. B. Addison (1989). Introduction. In M. J. Packer and R. B. Addison (eds.), Entering the Circle: Hermeneutic Investigation in Psychology, pp. 13-36. Albany: State University of New York Press.
Patton, M. Q. (1980). Qualitative Evaluation Methods. Beverly Hills, CA: Sage Publications, Inc.
Peterson, M. W. (1985a). Emerging developments in postsecondary organization theory and research: Fragmentation or integration. Educational Researcher 14(3): 5-12.
Peterson, M. W. (1985b). Institutional research: An evolutionary perspective. In M. W. Peterson and M. Corcoran (eds.), Institutional Research in Transition, pp. 5-15. New Directions for Institutional Research, no. 46. San Francisco: Jossey-Bass Inc., Publishers.
Peterson, M. W., and M. G. Spencer (1993). Qualitative and quantitative approaches to academic culture: Do they tell us the same thing? Higher Education: Handbook of Theory and Research, Vol. IX, pp. 344-388. New York: Agathon Press.
Phillips, D. C. (1983). After the wake: Postpositivistic educational thought. Educational Researcher 12(5): 4-12.
Pike, K. L. (1967). Language in Relation to a Unified Theory of the Structure of Human Behavior. The Hague: Mouton.
Rabinow, P., and W. M. Sullivan (1987). The interpretive turn: A second look. In P. Rabinow and W. M. Sullivan (eds.), Interpretive Social Science: A Second Look, pp. 1-30. Berkeley, CA: University of California Press.
Rogers, C. R. (1951). Client-Centered Therapy. Boston: Houghton.
Rosenau, P. M. (1992). Post-Modernism and the Social Sciences: Insights, Inroads, and Intrusions. Princeton, NJ: Princeton University Press.
Rossman, G. B., and B. L. Wilson (1985). Numbers and words: Combining quantitative and qualitative methods in a single large-scale evaluation study. Evaluation Review 9(5): 627-643.
Runes, D. D. (1983). Dictionary of Philosophy. New York: Philosophical Library, Inc.
Schutz, A. (1967). The Phenomenology of the Social World. Evanston, IL: Northwestern University Press.
Sherman, R. R., and R. B. Webb (1988). Qualitative research in education: A focus. In R. R. Sherman and R. B. Webb (eds.), Qualitative Research in Education: Focus and Methods, pp. 2-21. New York: The Falmer Press.
Shulman, L. S. (1981). Disciplines of inquiry in education: An overview. Educational Researcher 10(6): 5-12, 23.
Smith, J. K. (1983a). Quantitative versus qualitative research: An attempt to clarify the issue. Educational Researcher 12(3): 6-13.
Smith, J. K. (1983b). Quantitative versus interpretive: The problem of conducting social inquiry. In E. House (ed.), Philosophy of Evaluation, pp. 27-52. San Francisco: Jossey-Bass Publishers.
Smith, J. K. (1984). The problem of criteria for judging interpretive inquiry. Educational Evaluation and Policy Analysis 6(4): 379-391.
Smith, J. K., and L. Heshusius (1986). Closing down the conversation: The end of the quantitative-qualitative debate among educational inquirers. Educational Researcher 15(1): 4-12.
Soltis, J. F. (1984). On the nature of educational research. Educational Researcher 13(10): 5-10.
Stanfield, J. H. (1985). The ethnocentric basis of social science knowledge production. In E. W. Gordon (ed.), Review of Research in Education, vol. 12, pp. 387-415. Washington, DC: American Educational Research Association.
Taylor, C. (1987). Interpretation and the science of man. In P. Rabinow and W. M. Sullivan (eds.), Interpretive Social Science: A Second Look, pp. 33-81. Berkeley, CA: University of California Press.
Tierney, W. G. (1991). Utilizing ethnographic interviews to enhance academic decision making. In D. M. Fetterman (ed.), Using Qualitative Methods in Institutional Research, pp. 7-22. San Francisco: Jossey-Bass, Publishers.
Urmson, J. O., and J. Ree (1989) (eds.). The Concise Encyclopedia of Western Philosophy and Philosophies. Boston: Unwin Hyman.
Yancey, B. D. (1988a). Exploratory data analysis methods for institutional researchers. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 97-110. San Francisco: Jossey-Bass, Publishers.
Yancey, B. D. (1988b). Institutional research and the classical experimental paradigm. In B. D. Yancey (ed.), Applying Statistics in Institutional Research, pp. 5-10. San Francisco: Jossey-Bass, Publishers.
Received May 23, 1994.

RESEARCH TOPIC: LEARNING AND DEVELOPMENT IN CORPORATE AMERICA

The Role of Your Paradigms: At this stage in the overall conceptualizing of your data-driven project, how do you see the role of qualitative, quantitative, and/or mixed-methods research paradigms adding to (or perhaps detracting from) your work? Based on your decisions for the type(s) of data you would collect, what critiques would you anticipate from others at the institution? Would others perceive your research as providing valuable insights or actionable information? How would you respond? Be sure to ground your discussion in our assigned readings for the week as you identify not only what type(s) of data you would gather, but also what the implications of these data may be.

· Your initial post (approximately 200-250 words) should address each question in the discussion directions.