Team Diagnostic Survey: Development of an Instrument
Wageman, Ruth; Hackman, J. Richard; Lehman, Erin
The Journal of Applied Behavioral Science, Dec 2005, 41(4), p. 373 (ProQuest)
Lesson Week 3
This week we work with data analysis. Evaluators need to explain to their stakeholders how data will be collected and analyzed. Most of us, myself included, are not experts in math or statistics. The point of this lesson is not to make you an expert, but to expose you to the simple data analysis methods available and to the terminology, so that if you are involved in a project requiring a higher level of evaluation you are better able to understand and judge the accuracy of the results.
First, a statistic can be a number, a characteristic, a tool, or a technique. Data can be summarized with simple measures of central tendency, such as the mode, mean, and median, and with measures of variability, such as the range, percentiles, standard deviation, and variance.
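As a minimal sketch, these measures can be computed with Python's built-in statistics module (Python 3.8 or later for quantiles); the daily visitor counts below are invented purely for illustration.

```python
import statistics

# Hypothetical daily visitor counts for one week (illustrative only).
daily_visitors = [120, 95, 140, 120, 210, 310, 180]

# Measures of central tendency.
print("mean:     ", statistics.mean(daily_visitors))     # arithmetic average
print("median:   ", statistics.median(daily_visitors))   # middle value when sorted
print("mode:     ", statistics.mode(daily_visitors))     # most frequent value

# Measures of variability.
print("range:    ", max(daily_visitors) - min(daily_visitors))
print("quartiles:", statistics.quantiles(daily_visitors, n=4))  # 25th, 50th, 75th percentiles
print("std dev:  ", statistics.stdev(daily_visitors))    # sample standard deviation
print("variance: ", statistics.variance(daily_visitors)) # sample variance
```

Even this small example shows why both kinds of measures matter: the mean and median summarize a typical day, while the range and standard deviation show how much busy days pull the counts apart.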
For a good example of how this works in "real life," take a look at Making Data Collection of Statewide Programs Useful. Another aspect that deserves mention is meta-analysis: an analysis of effect size based on the quantitative results of multiple previous studies of the same or similar interventions. The website shown offers an online lecture on how to conduct a meta-analysis. Finally, take the time to view or listen to Transparency Through Technology: Evaluating Federal Open Government. This will give you an idea of the importance of data and evaluation in the federal government. Data.gov offers data sets you can draw on for evaluation purposes.
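To make the arithmetic behind meta-analysis concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling of effect sizes; the four studies' effect sizes and variances are invented for illustration and do not come from any real study.

```python
import math

# Hypothetical standardized effect sizes (d) and their variances (v) from four
# studies of the same or similar interventions; values are illustrative only.
studies = [(0.30, 0.04), (0.55, 0.09), (0.10, 0.02), (0.42, 0.06)]

# Fixed-effect (inverse-variance) weights: more precise studies count for more.
weights = [1.0 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

print(f"pooled effect size: {pooled:.2f}")
print(f"approximate 95% CI: {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f}")
```

Real meta-analyses add checks for heterogeneity (for example, random-effects models) and publication bias, which is why the linked lecture is worth your time.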
This week we also look at surveys and sampling. To gather the data needed to analyze a problem or issue, researchers can draw on existing sources, such as the US Census, or conduct their own surveys. Other means of gathering data are interviews with key informants, focus groups, and incidence rates. Many times a sample survey will be given to a small representative group and the results extrapolated to the entire population to produce projections for a larger audience (think about how television show ratings are determined). At times the researcher may not be familiar with the topic or area and might conduct what is called a "snowball" sample, asking each person being interviewed to suggest others who might help. When I was interviewed by a PhD student for her research, she asked if I could put her in touch with others who had my background. That is an example of a snowball sample.
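In the spirit of the television-ratings example, here is a minimal sketch of extrapolating a sample proportion to a population, with an approximate margin of error; all of the numbers are invented.

```python
import math

# Hypothetical ratings-style survey: 400 of 1,000 sampled households watched the show.
sample_size = 1_000
watched = 400
population = 120_000_000           # total households in the market (illustrative)

p_hat = watched / sample_size      # sample proportion
projection = p_hat * population    # projected households in the full population

# Approximate 95% margin of error for a simple random sample proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"sample share: {p_hat:.0%} (plus or minus {margin:.1%})")
print(f"projected households: {projection:,.0f}")
```

The projection is only as good as the sample's representativeness, which is exactly why snowball samples are useful for reaching hard-to-find informants but are not a sound basis for population estimates.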
For our purposes here, we will concentrate on surveys
and specifically the survey instruments used by the NPS for
their visitor counts.
Part Two Lesson Week Three
Data can be quantitative (generally numeric) or qualitative (generally drawn from observations or opinion surveys). You will also review the four levels of data: nominal (named categories), ordinal (categories that can be ranked), interval (ranked data conveyed in equal intervals), and ratio (equal intervals with an absolute zero). The Science Museum of Minnesota produces a very informative website discussing evaluation data in a clear and simple format. Take a look at their website and explore the links.
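As a small illustration (the example fields are invented, not drawn from any NPS instrument), each level of measurement supports a different set of summaries:

```python
# Illustrative only: example survey fields mapped to the four levels of data,
# with the kinds of summaries each level supports.
levels = [
    ("park visited",             "nominal",  "named categories; counts and mode only"),
    ("satisfaction (1-5 scale)", "ordinal",  "ranked categories; median and percentiles"),
    ("temperature in degrees F", "interval", "equal intervals, no true zero; differences"),
    ("vehicles counted per day", "ratio",    "true zero; ratios such as 'twice as many'"),
]

for field, level, summaries in levels:
    print(f"{field:26s} {level:9s} {summaries}")
```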
To see the analysis performed on the data we have looked at so far, visit the NPS Visitor Use Statistics website. For this part of the lesson choose California (when you are done, be sure to explore the parks in your own state!), then choose the Golden Gate National Recreation Area. You will see a series of reports. Click on Traffic Counts by Location. Remember the sections that showed the inductive loop counter at the entrance to Lower Fort Mason? Then the amount subtracted as non-reportable? And finally the application of the multiplier from our previous lesson? Now you see the result. This is the data produced by those counts and surveys, and it lets you make month-to-month as well as year-to-year comparisons; a worked sketch of the arithmetic follows the list below. How might these statistics be used? During 2015 the national parks received requests such as the following:
· A consulting firm wanted assistance interpreting visitation data for a development proposal for a tribe
· The Bureau of Ocean Energy Management wanted 5 years of visitation data for some Alaska parks
· Glacier National Park wanted to identify their 100 millionth visitor
· A travel magazine wanted to know the ranking (by visitation) for a particular park
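Here is the worked sketch promised above. The figures are invented; only the sequence of steps (subtract non-reportable traffic, then apply the persons-per-vehicle multiplier) follows the NPS procedure described in the lesson.

```python
# Hypothetical monthly figures for a single counting location (illustrative only).
traffic_count = 52_000       # vehicles recorded by the inductive loop counter
non_reportable = 4_000       # subtracted as non-reportable (non-recreation traffic, for example)
persons_per_vehicle = 2.4    # multiplier from the persons-per-vehicle survey

reportable_vehicles = traffic_count - non_reportable
estimated_visitors = reportable_vehicles * persons_per_vehicle

print(f"reportable vehicles: {reportable_vehicles:,}")      # 48,000
print(f"estimated visitors:  {estimated_visitors:,.0f}")    # 115,200
```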
Next take a look at the Annual Park Recreation Visitation
Graph. Is the visual representation easier to follow? More
difficult? Review the other reports. Presentation of the data is
almost as important as the data itself.
· Week Three Visitor Survey
· Following is a sample survey instrument for the Persons-Per-Vehicle Survey we have been discussing:
INSTRUCTIONS FOR ENTERING DATA ON THE PERSONS-
PER-VEHICLE SURVEY FORM
<PARK NAME>
1. This survey will help your park establish a persons-per-vehicle multiplier to be used with traffic counts for estimating the number of people entering the park by vehicle.
· The surveyor conducts the survey for only one (1) hour
during the sample period at each of the sample locations that is
open on the day of the survey.
· If a survey time period is marked AM, please conduct a one-hour survey between the hours of 8:00 AM and 12:00 PM.
· If the survey time period is marked PM, please conduct a one-hour survey between the hours of 12:00 PM and 5:00 PM.
· The surveyor selects the AM or PM time period when he/she can safely and completely conduct a one-hour survey at the required location on the day listed on the accompanying survey calendar.
· Please vary your start times during each AM or PM sample period. Start times do not need to be on the hour, but each survey does need to be conducted for a full one-hour period.
2. The surveyor fills out the bottom of the form by entering
their name, the date of the survey and the time the survey
begins.
3. To fill out the body of the form, count the number of people in each vehicle (boat or auto) as it enters the park. Place a tally mark in the appropriate box representing the number of persons in that vehicle. (If there are 2 persons in a vehicle, put a tally mark in column two (2). If there are more than 6 persons in a vehicle, put the exact number in column 7+.)
4. At the end of each month being surveyed send the
completed forms to:
Name of Person to Contact
National Park Service
NRSS/EQD/SSB
1201 Oakridge Drive
Fort Collins, CO 80525
If you have any questions, please contact Person's name at the
above address or phone 970-225-XXXX.
Thank you for your cooperation
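The tally described in step 3 feeds directly into the multiplier mentioned in step 1. Here is a minimal sketch of that calculation with invented tallies; column 7+ is handled by using the exact occupant counts recorded on the form.

```python
# Hypothetical completed tally: tally[k] is the number of vehicles observed with
# k occupants (columns 1-6); seven_plus lists the exact counts from column 7+.
tally = {1: 14, 2: 22, 3: 9, 4: 6, 5: 2, 6: 1}
seven_plus = [8, 11]   # two vehicles, with 8 and 11 occupants

vehicles = sum(tally.values()) + len(seven_plus)
persons = sum(k * n for k, n in tally.items()) + sum(seven_plus)
multiplier = persons / vehicles

print(f"{vehicles} vehicles carrying {persons} persons")
print(f"persons-per-vehicle multiplier: {multiplier:.2f}")
```

This multiplier is what gets applied to the reportable traffic counts in the visitor-estimate calculation shown earlier in the lesson.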
· Week Three Survey Tally
Equipping the Constructivist Researcher: The Combined use of
Semi-Structured Interviews and Decision-Making maps
Reza Mojtahed, Miguel Baptista Nunes, Jorge Tiago Martins
and Alex Peng
Information School, University of Sheffield, Sheffield, UK
Abstract: An interview is a technique used by qualitative
researchers to elicit facts and knowledge about the phenomenon
under investigation using a series of interview questions.
Nonetheless, the establishment of conversation and negotiation
of meaning during the interview process is still challenging for
those who conduct interviews, no matter how skilled or
experienced researchers are with the process. It is felt in
particular that researchers would benefit from the use of an
instrument that, in the course of semi-structured interviews,
would foster an environment where the ideas and meanings
conveyed by informants could be developed and further
discussed in order to achieve a deeper understanding of the
phenomenon under investigation. Therefore, this paper attempts
to develop and introduce decision-making maps as a new
instrument to be used during the process of conducting semi-
structured interviews. This newly proposed instrument is
inspired by the concept and practice of perceptual mapping. The
paper discusses the rationale for proposing the
development and application of decision-making map in the
context of semi-structured interviews, and reflects on the
range of implications for the researcher, for participants, and
for the practice of qualitative research that claims affiliation
with constructivism.
Keywords: inductive research, constructivism, qualitative
interview, perceptual mapping, decision-making map
1. Introduction
Cohen & Manion (1994, p.36) described the constructivist
approach to research as being based on
understanding the world of human experiences. This world of
experiences is continuously shaped through the
human interaction with objects and other subjects. In order to
access and achieve an understanding about
human perceptions, one of the main requirements of the
constructivist approach is the establishment of a
reciprocal and communicational ground between the research
project participants and researchers in the co-
construction of meaning. Eventually this would lead to building
a theory that is based on the experiences of
researchers and that of research participants (Mills et al. 2006).
Several authors have discussed the use of constructivist
epistemological principles in inductive research. The
constructivist paradigm traditionally follows qualitative
research methods, although quantitative methods may
also be used in support of qualitative data (Mackenzie & Knipe
2006). Since constructivist researchers tend to
rely on participants’ viewpoints about the situations under
investigation (Creswell 2003, p.8), the vast majority
of inductive research remains interview-based and interpretivist
in nature. Accordingly, the use of interviews
as a data collection method in inductive research is justified by
its affinity with daily-life conversations and the
centrality of interactions, exchanges, and negotiation of
meaning between two parties (Kvale & Brinkmann
2009), which corresponds to constructivist approaches to
research.
There are different approaches to carry out an interview,
although the dominant characterisation of interviews
is based on the dichotomy between structured and unstructured
interviews (Collins 1998). However, more
varieties of interview styles have been recognised by
researchers (e.g. May 2003, p.121), such as semi-
structured interview, group interview and focus group
interview. Each one of these types follows their own
approach to conduct an interview and to collect the research
data. A clear sign of the differences between
each type of interview styles is the way the interview questions
are formulated and the amount of freedom
given to interviewees in their replies to each interview question
(e.g. Bryman 2012). Nonetheless, the
operationalization of qualitative interviews’ underlying
epistemological principles remains complex and at
times controversial.
Whether the researcher applies a semi-structured or
unstructured interview, there is an unconditional
principle that researchers need to adhere to during the interview
process, which is the capacity of maintaining
social negotiation of meanings between the interviewee and
interviewer. This component is somehow missing
or underachieved during the operationalization of research that
claims a constructivist affiliation. In addition,
most of the tenets of modern inductive approaches such as
thematic analysis, grounded theory or even
phenomenography are predicated on listening to informants'
perceptions of the social world around them,
interpreting them and producing a theory that attempts to
generate a context-bound understanding. This
process contains an inherent artificiality since researchers are
trying to understand social worlds by
interpreting informants’ perceptions without any feedback loop
that enables negotiation and validation of the
adequacy of the interpretation.
To address the challenge of researchers’ exclusive reliance on
the interpretation of interview evidence to
construct their studies, this paper proposes the introduction of a
methodological innovation in semi-structured
interview design: the use of decision-making maps that help
both the researcher and the informant negotiate
meaning, define data and advance interpretations in a
collaborative fashion.
This is particularly important in the context of qualitative
research that is aligned with a constructivist
conceptualisation of knowledge – one that asserts that
researchers must rely upon the "participants' views of
the situation being studied" (Creswell 2003, p.8). The difficulty in
constructivist research is exactly demonstrating that
the participants’ view of the situation as reported in research
findings is not simply the result of researchers’
interpretive whim, and that negotiation of meaning has in fact
occurred. This is intimately related to what
Denzin and Lincoln (2005) describe as constructivism’s
subjectivist epistemology, in the sense that knower and
respondent are co-creators of understandings (Denzin and
Lincoln 2005). The use of decision-making maps in
conjunction with semi-structured interviews, as advocated in
this paper, enhances and materialises the
opportunities for co-creation of understandings between
researcher and participant, at the moment of data
collection.
Further explanation of this process and its main stages are
advanced in the following sections. A detailed
description of the rationale and process of decision-making
maps is provided in Section 2. Section 3 discusses
the stages, difficulties (and how they were overcome) and
implications of applying the instrument to an
empirical context: a single case study of a UK local city
council’s decision-making process concerning new IS
projects. The paper closes with a recommendation to use
decision-making maps in the process of semi-
structured interviews to promote interaction with informants, to
foster goal-oriented thinking, and to
operationalise social negotiation and co-production of
knowledge.
2. A description of the development of decision-making map
“Researching a problem is a matter of using the skills and
techniques appropriate to do the job required
within practical limits: a matter of finely judging the ability of
a particular research tool to provide the
data required and itself a skill” (Hughes & Sharrock 2006,
p.12).
This section describes the process of developing decision-
making maps as a data collection instrument to be
used in conjunction with semi-structured interviews. A review
of perceptual mapping and its uses in Marketing
research is provided, since the idea to design decision-making
maps for interpretive, interview-based research
stemmed from this field. This is followed by a detailed
explanation of the structure and process of the
proposed data collection instrument.
2.1 What is the perceptual map?
Within Marketing research, perceptual maps are known as a powerful technique used for designing new products, advertising, determining good retail locations, and developing several other marketing solutions and applications (Hauser & Koppelman 1979). Examples of the use of perceptual mapping in Marketing
research include Kim’s (1996) perceptual mapping of hotel food
and beverages’ attributes and preferences,
Wen and Yeh’s (2010) investigation of customers’ perceptions
and expectations concerning international air
passenger carriers, or Maltz et al.’s (2011) investigation of
sourcing managers' perceptions of low cost regions.
In general terms, perceptual mapping techniques help organisations understand how their products are
perceived by consumers in relation to the different products in
the marketplace. Perceptual mapping
techniques aim at producing a diagrammatic representation of
that perceptual space occupied by
organisations (Kholi and Leuthesser, 1993). A typical
perceptual map will feature the following characteristics:
pair-wise distances between product alternatives that indicate
how closely related products are according to
customers’ understanding; a vector on the map that
“geometrically denote[s] attributes of the perceptual
map”; and axes that suggest “the underlying dimensions that
best characterise how customers differentiate
between alternatives” (Lilien and Rangaswamy, 2004:119).
Perceptual maps’ dominant approach to collect
and analyse data on consumers’ perceptions of products is
objectivist, developing in most cases via attribute-based methods (factor analysis) or similarity-based methods (multi-dimensional scaling).
2.2 A qualitative design of decision-making map
A central claim advanced in this paper is that some features of
perceptual mapping techniques can be
developed in qualitative research designs, in conjunction with
semi-structured interviews, provided that the
researchers take into account the particularities of “verbatim
conversation” that occurs between interviewee
and interviewer as the way to find answers for questions such as
“how” and “why” (McNeill & Chapman 2005,
p.22).
Unlike in the perceptual mapping techniques used in Marketing research, the priority here is not extracting meaning
from numerical approaches and statistical analysis of the social
facts, or applying multidimensional scaling and
factor analysis to construct a perceptual map.
Furthermore, the perceptual map as used in Marketing research
is the outcome of a technique consisting of
detailed procedures, whereas the qualitative decision-making
map proposed in this paper consists of an
instrument designed to collect information during the semi-
structured interview.
The perceptual map as used in Marketing research includes
three characteristics. Firstly, it has pair-wise
distances between product alternatives that specify how close or
far the products are in the mind of
customers. Secondly, it has a vector on the map that indicates
magnitude and direction in the geographical
space of map. Finally, it displays axes of the map that show the
underlying dimensions that characterise how
alternatives are differentiated by customers.
Based on these fundamental characteristics, we developed the
decision-making map as a data collection
instrument. The details pertaining to the operationalization of
the proposed instrument are advanced in the
following sections. An empirical application of the instrument
is discussed in section 3.
2.2.1 The processes of decision-making map
Departing from the core principles of perceptual mapping, we
have designed a new instrument, which should
be used during the semi-structured interview process. In
practical terms, it requires both interviewees’ and
interviewer’s engagement in producing a diagram on a sheet of
paper (e.g. A3 size) provided by the
researcher.
Although consisting mostly of blank space where the informant
is expected to jot down concepts and ideas,
the diagram provides quadrants organised according to
dominant research perspectives that will be used as
the bases for discussion and conversation between informant
and researcher. The selection of dominant
perspectives is informed by the review of literature in the
substantive area of research study and by earlier
observations of the phenomenon under study. The literature
review helps the researcher identify sensitising
concepts and points of departure – not strict perspectives that
could distract the researcher's attention from
emergent data. Accordingly and in a similar perspective to that
advocated by Charmaz (2006), the literature
review helps to demonstrate grasp of relevant concepts.
Furthermore, should research participants want to make contributions beyond the conceptual terms suggested by the diagram, an additional and entirely blank map is made available for use during the interviews.
Table 1: Two principles of the decision-making map
1 Identifying related perspectives of the research topic under
investigation
2 Filling the spaces in the decision-making map
Also drawing from the original use of perceptual mapping in
Marketing research, the proposed decision-
making map instrument includes different axes that operate as
borders to the dominant research perspectives
extracted from the literature and earlier observations of the
phenomenon. However, the position in the
diagram where the informant decides to jot down ideas and
concepts (i.e. under which quadrant or section,
and how distant/close to the different axis) is not subject to
quantification or measurement.
For instance, if after conducting the literature review we are
able to identify three dominant perspectives
related to the phenomenon under investigation, the diagram to
be completed by the informant during the
interview process would resemble what is depicted in Figure 1.
This proposed instrument involves the informant in the process of writing down key terms related to the phenomenon under investigation, placing them on the map according to the informant's perceptions of where those key terms belong.
It is expected that the process of filling the decision-making
map is completed through a series of questions
being asked by the researcher that set the conversation in motion,
stimulating the negotiation of key terms that
are advanced by the informant and recorded in the decision-
making map.
This practice can lead to a deeper understanding of how
different dimensions affect the unfolding
understanding of the substantive research problem, facilitating
the identification of key themes and the
process of theory-building. Furthermore, the process empowers
the interviewer to ask more precise questions
concerning the series of elements identified and written-down
by the informant. There are also increased
opportunities for comparison across elements and dimensions
identified in each diagram.
Figure 1: A sample decision-making map
More significantly this practice provides an opportunity for
informant and interviewer to establish discussion
and social negotiations of meaning over the subjects under
investigation. We believe that this approach
enables the visualisation of facts that emerge as influential
themes according to the informants’ perceptions.
Finally, this approach can enhance the process of data analysis
and theory building, by enabling the production
and mapping of informant-led theoretical abstractions in the
form of themes and keywords that are positioned
in the diagram.
3. A use of decision-making map in information system research
To illustrate the use of the decision-making map, this section
describes the use of this instrument in the
context of an Information Systems (IS) research project. It
includes explanations of how the data collection process developed, what kinds of challenges were experienced during the application of the instrument, and what kinds of techniques were applied to mitigate the difficulties encountered. The presentation of this case includes an overview of the project's objectives, the use of the decision-making map, and the challenges of applying the tool. The project sought to identify which elements
influence public sector (UK local councils)
administrators’ decision-making when executing e-Government
projects.
3.1 Objectives of research project
The development and implementation of electronic government
(e-Government) have been studied since the
early introduction of the concept of e-Government to public
sector organisations. Various models and approaches have been suggested for public sector administrators and researchers to follow in investigating and monitoring the trends of providing new e-Government services to the public. A range of advantages and complexities have been identified through practitioners' engagement with the process of e-Government development and implementation. Nonetheless, after more than
a decade of e-Government development and
advancement, recent e-Government studies indicate that the
public sector organisations in both developed
and developing countries have mostly achieved the preliminary
stages of the models (e.g. United Nations
2012). This finding leads us to question what issues or
elements have hindered the process of developing and
implementing e-Government services. To better understand this
issue from an IS perspective, we recognised
that the process of IS investment decision-making by public
sector administrators to provide new e-
Government services had to be studied. The existing literature
displays very general knowledge about this area
of study, since most of the currently available explanations of IS projects pertain mainly to private sector
organisations (Gauld 2007).
Due to the exploratory nature of this research, the selection of
an interpretative case study was deemed
appropriate (Walsham 1995). In addition, based on Yin's guidelines, if the research questions are "how", "what" and "why" questions, the focus of the research is on contemporary events, and the researcher has little control over those events, then the case study is the best approach (Yin 2009). Furthermore, the use of a
case study helps to obtain a holistic and in-depth knowledge in
regard to the phenomenon under investigation
(Pickard 2007, p.86).
In order to operationalise the objectives of this study, a local
city council in South Yorkshire - Sheffield local
council - agreed to participate in this research project as a case
study. In total 17 interviews (corresponding to
1040 minutes) took place with key stakeholders (i.e. senior,
middle and front line managers) in the process of
e-Government development and implementation decision-
making. The interview guide was designed to ask
interviewees to reflect on which actions they considered as
necessary and critical when deciding to provide a
new e-Government service. Furthermore, the informants were
asked to highlight elements that have impacted
on their decision making or the decision-making of their
colleagues when new services have been developed.
The semi-structured interview guide and the decision-making map were used together as the data collection instruments. After completing the data collection process, thematic analysis was used to code and interpret the data.
3.2 Application of decision-making map
Following the principles described in section 2, we first initiated the process of preparing for conducting interviews and designing decision-making maps with a sensitising review of the literature. Reviewing the literature in the substantive area of e-Government development and implementation led us to identify 13 models of e-Government development and implementation. Interestingly, 8 of the 13 identified e-Government development models were developed between 2000 and 2002. Some discussion of e-Government development models and structures is unavoidable here, but it is nevertheless useful for delivering the objectives of this paper.
Since the phenomenon of e-Government development and
implementation was the subject of interest, the
most prominent models of e-Government development and
implementation were carefully studied and the
way e-Government can transform public sector organisations was highlighted. As a result of reviewing the e-Government literature, four categories of changes to public sector organisations were identified in connection with e-Government development and implementation. We named these categories of change organisational, operational, technological and socio-environmental changes. This means that we identified four perspectives that could be carried over into the decision-making map to facilitate data collection during interviews.
The interview guide was organised into three sections. The
first section included questions that asked
informants to describe the past and current e-Government
development and implementation in the local city
council. The second section was centred around the decision-
making map, and included questions that broadly
covered the four perspectives identified for the elaboration of
the decision-making map. Finally, the last series
of questions focused on determining how appropriate the
perspectives were to the specific context.
Therefore, the decision-making map was prepared based on four perspectives, giving rise to four quadrants (each allocated to one perspective). In order to complete the decision-making map, informants were prompted to reflect on the range of organisational, operational, socio-environmental and technological perspectives that affect the local city council's decision-making to provide new e-Government services.
In case participants felt that they wanted to contribute key
terms that could not be allocated into any of
the proposed quadrants, an additional and entirely blank map
was also available for use during the interviews.
After informants jotted down which elements they perceived to
be influential in the process of decision-
making, the researcher prompted discussion and further
elaboration on each of the concepts recorded by
informants on the diagram, using questions such as: "Could
you explain the identified terms and
elements?”; “Why do you think these elements are important?”;
“Why did you put this element under this
perspective?”; “Does this element impact on or applies to other
perspectives?”.
Discussion, negotiation and co-construction of meaning
developed until both the researcher and the informant
felt that there were no further concepts or ideas to discuss.
Figure 2 illustrates one of the resulting decision-
making maps. As can be observed in the figure, the participant
identified a range of factors (e.g. financial
factors, business requirements, corporate strategy, customer
experience), inter-relations between them
(represented by arrows), and a set of stakeholders – written
down in green - that were perceived to be
engaged in the decision-making process (e.g. councillors,
central government, senior management).
Figure 2: Example of a decision making map where one
participant recorded perceived factors of e-
government service development
Following a process of inductive thematic data analysis, a set of themes was identified, representing
participants’ views on the range of factors that impact the
process of e-Government development and
implementation decision-making in Sheffield City Council. In
broad terms, the emergent factors can be
grouped in four major themes – organisational management
factors, government policy factors, financial
factors, and technological factors - with underlying sub-themes
displaying a multi-layered configuration, as
illustrated by the table presented in Appendix 1.
[Figure 2 legend: factors recorded by the interviewee are placed within quadrants labelled Operational (OPR), Organisational (ORG), Social & Environmental (S&E) and Technological (TEC), separated by the map's axes.]
3.3 Challenges of using decision-making map
Over the course of the 17 interviews in which the decision-making map was used, the researchers did not face serious difficulties. However, filling in the map was challenging for some informants. The most significant challenge was the time required by some participants to familiarise themselves with the diagram. This challenge was easily solved by having the researchers explain the purpose of the diagram and the process of recording terms or themes, or by providing assistance with filling in the map.
Informants were seldom more willing to engage in the conversation than to record their ideas in more abstract terms with the help of the diagram. In these cases the researchers were aware of the need to respect informants' preferred way of expressing their thoughts. The recommended course of action in these cases is for the researcher to start writing notes about the key terms mentioned by the interviewees, later discuss the highlighted material, and inquire about its location on the map. It is important to note that this implied making sure that the recording of terms and concepts, and their interpretation, was a true reflection of informants' discourse.
The positioning of elements and factors on the map may also disrupt the informants' cognitive process
because they may be excessively concerned about where the
elements need to be assigned. However, this
issue can be easily avoided by asking participants to highlight
the aspects or factors that they perceive as
relevant and then begin the process of negotiation to allocate
them into one of the quadrants.
Another issue may be the occurrence of informants who are so
deeply immersed in the process of identifying
factors that they forget to determine their location in the
quadrants. If the researchers interrupt them at that
moment, they may feel intimidated and this may interfere with
the thought flow process. In these
circumstances the advice would be to first let the informant
complete the identification of terms and elements
for all perspectives contained in the map, and subsequently
prompt discussion about their location and
internal relationships within and across the quadrants.
However, since the location of terms on the diagram is
important to the interpretation of data, the researchers
must avoid providing subjective suggestions to the informant.
This can be achieved with the use of laddering
interview techniques (Reynolds & Gutman 2001), more
specifically the use of ‘why’ questions that address the
reasons for their term choice and location preferences.
A possible limitation associated with the use of decision-making maps is that researchers may at times find it difficult to establish positive rapport with informants when the informants are being asked to complete a task.
However, it can be counter-argued that the instrumental potentialities of the decision-making map empower interviewees to uncover root concepts,
and that empathy may be generated in the
process of negotiation to allocate terms into quadrants.
Another drawback is the potential difficulty in managing the
“essential tension in interviews” (Rapley, 2001),
that of balancing the need to collect data with a genuine
commitment to interactional involvement. This can
be minimised through using the map as an opportunity to engage
in the collaborative construction of a deep,
textured picture. The map is not a deterministic tool, but it can
certainly operate as a topic initiator and/ or as
an effective producer of follow-up questions.
Finally, informants’ disabilities may impede the process of
completing the diagram. Should this occur, the
researchers can assist the informant in the process of
completing the diagram through creating opportunities
to maximise discussion around key terms and their location in
the diagram.
4. Conclusion
In this paper, the need for methodological innovation along the
lines of the constructivist research paradigm is
emphasised. The novel methodological instrument outlined is a
decision-making map, in which a semi-
structured perceptual map - organised around literature-review-informed axes and quadrants - is used by the
investigators to promote interaction with informants in an
interview situation, to foster goal-oriented thinking,
and to operationalise social negotiation and co-production of
knowledge. By engaging the informant in the
identification of major perceptual themes this process gives the
informant the steer, which can prove
extremely helpful in improving the validity of qualitative
research that claims affiliation with constructivist
ontology. In practical terms, the decision-making map
operationalises an important tenet of constructivist
research – that of social negotiation of meaning – by operating
as an instrument for co-creation of
understandings between researcher and participant, at the
moment of data collection. It creates moments for
discussion and it allows the recording of the concepts that
participants chose as the best descriptors for their
perceptions. In so doing, it reverses the typical accountability
relations that develop in an interview encounter
and it increases the plausibility of analytical theorisations that
are not a monopoly of the researcher’s
interpretive capabilities.
Acknowledgement
The authors would like to thank the Sheffield City Council for
their participation in the study reported in this
paper.
References
Bryman, A., 2012. Social research methods 4th ed., New York:
Oxford University Press.
Charmaz, K. 2006. Constructing grounded theory, London:
Sage.
Cohen, L. & Manion, L., 1994. Research methods in education
4th ed., London: Routledge.
Collins, P., 1998. Negotiating Selves: Reflections on
“Unstructured” Interviewing. Sociological Research Online, 3
(3).
Available at: http://www.socresonline.org.uk/3/3/2.html.
Creswell, J.W., 2003. Research design : qualitative,
quantitative, and mixed methods approaches 2nd ed., CA: Sage:
Thousand Oaks.
Denzin, N.K. & Lincoln, Y.S. 2005. The Sage handbook of
qualitative research, London: Sage.
Gauld, R., 2007. Public sector information system project
failures: Lessons from a New Zealand hospital organisation.
Government information quarterly, 24 (1), pp.102–114.
Hauser, J.R. & Koppelman, F.S., 1979. Alternative perceptual
mapping techniques: relative accuracy and usefulness. Journal
of Marketing Research, 16 (4), pp.495–506.
Hughes, J.A. & Sharrock, W.W., 2006. The philosophy of social
research, London: Longman Pub Group.
Kim, H. 1996. Perceptual mapping of attributes and preferences:
an empirical examination of hotel F&B products in Korea.
International Journal of Hospitality and Management, 15 (4),
pp.373-391.
Kholi, C.S. & Leuthesser, L. 1993. Product positioning: a
comparison of perceptual mapping techniques. Journal of
Product
& Brand Management, 2 (4), pp.10-19.
Kvale, S. & Brinkmann, S., 2009. InterViews: Learning the
Craft of Qualitative Research Interviewing 2nd ed., Thousand
Oaks, CA: Sage Publications, Incorporated.
Lilien, G.L. & Rangaswamy, A. 2004. Marketing engineering:
computer-assisted marketing analysis and planning, Victoria:
Trafford Publishing.
Mackenzie, N. & Knipe, S., 2006. Research dilemmas:
Paradigms, methods and methodology. Issues In Educational
Research, 16(2), pp.193–205. Available at:
http://www.iier.org.au/iier16/mackenzie.html
Maltz, A., Carter, J.R. & Maltz, E. 2011. How managers make
sourcing decisions about low cost regions: Insights from
perceptual mapping. Industrial Marketing Management, 40 (5),
pp.796-804.
May, T., 2003. Social Research: Issues, Methods and Research
3rd ed., Buckingham: Open University Press.
McNeill, P. & Chapman, S., 2005. Research methods 3rd ed.,
New York: Routledge.
Mills, J., Bonner, A. & Francis, K., 2006. The development of
constructivist grounded theory. International Journal of
Qualitative Methods, 5 (1), pp.25-35.
Pickard, A.J., 2007. Research methods in information, London:
Facet publishing.
Rapley, T.J. 2001. The art(fullness) of open-ended interviewing:
some considerations on analysing interview. Qualitative
Research, 1(3), pp.303-323.
Reynolds, T.J. & Gutman, J., 2001. Laddering Theory, Method, Analysis, and Interpretation. In T.J. Reynolds and J.C. Olson (eds), Understanding Consumer Decision Making – The Means-End Approach to Marketing and Advertising Strategy. Hove, UK: Psychology Press, pp.25-62.
United Nations, 2012. United Nations E-Government Survey
2012: E-Government for the People, New York. Available at:
http://unpan1.un.org/intradoc/groups/public/documents/un/unpa
n048065.pdf.
Walsham, G., 1995. The Emergence of Interpretivism in IS
Research. Information systems research, 6 (4), pp.376–394.
Wen, C. & Yeh, W. 2010. Positioning of international air
passenger carriers using multidimensional scaling and
correspondence analysis. Transportation Journal, 49 (1), pp.7-
23.
Yin, R.K., 2009. Case study research: Design and methods 4th
ed., Thousand Oaks, CA: Sage Publications, Incorporated.
Appendix 1 –
Emergent themes representing participants’ views on the range
of factors that impact the process of e-
Government development and implementation decision-making
in Sheffield City Council
The Qualitative Report 2016 Volume 21, Number 1, How To
Article 2, 44-58
Translational Research Design:
Collaborating with Stakeholders for Program Evaluation
Kari Morris Carr
Indiana University, Bloomington, Indiana, USA
Jill S. Bradley-Levine
Ball State University, Muncie, Indiana, USA
In this article, the authors examine researcher collaboration
with stakeholders
in the context of a translational research approach used to
evaluate an
elementary school program. The authors share their experiences
as evaluators
of this particular program to demonstrate how collaboration
with stakeholders
evolved when a translational research approach was applied to
program
evaluation. Beginning with a review of literature regarding
stakeholder
participation in evaluation and other qualitative research, the
article reflects
on a method for conceptualizing participant involvement and
collaboration
within the translational framework. The relationship between
researchers and
stakeholders is articulated according to this method. We
interpose these
descriptions with their alignment to Petronio’s (2002, 2007)
five types of
practical validity for translational research. The paper ends with
a
consideration of what was learned throughout the evaluation
process, including
both successes and challenges, by means of the translational
model. Keywords:
Translational Research, Translational Validity, Participation in
Program
Evaluation, Collaborative Research
The translational research design represents a researcher’s
commitment to collaboration
with participants, and addresses issues of ethics and advocacy
that have been recognized in
established descriptions of qualitative research (Creswell, 2007;
Denzin & Lincoln, 2005; Fine,
Weis, Weseen, & Wong, 2000; Garner, Raschka, & Sercombe,
2006; Lincoln, Lynham, &
Guba, 2011; Korth, 2002; Smith & Helfenbein, 2009).
Specifically, translational research
represents an effort to translate findings into functional
solutions for research partners and
community members (Petronio, 2002). Yet the literature finds
that translational efforts are
neither easy nor occurring with great frequency (Maienschein,
Sunderland, Ankeny, & Robert,
2008; Petronio, 1999). In recent accounts, scholars have located
translational research within
the fields of communications and medicine in which discoveries
are driven (translated) toward
practical applications (Hamos, 2006; Petronio, 2007). In our use
of the term, both the process
(method) and product (outcome) characterize important aspects
of translational research,
particularly among the individuals with whom the researchers
collaborate: the local partners or
stakeholders. The evaluation project described in this article is
used to demonstrate how
translational research and collaboration with stakeholders
developed in the context of the
evaluation of an educational program. It is our goal to represent
the translational research
processes by sharing actual experiences in collaborating with a
specific evaluation partner.
However, we do not present results from actual data concerning
this evaluation.
This article recounts the relationship we developed while
working at a university-based
education research center with the Catholic diocese of a large
Midwestern city. The project
involved the evaluation of an after-school program established
to meet the educational needs
of children attending low-performing and high-poverty Catholic
schools. Though the initial
partnership developed out of the diocese’s need for program
evaluation, we identified this need
as an opportunity to forge a relationship with a community
partner and to contribute to the
existing body of research on after-school programs. The overall
mission of the university
research center was to use translational methods in all projects.
In practice, the approach was
two-fold. One facet consisted of the collaboration with
community partners for their immediate
research needs. The second included translation of research
results back to the field and to the
public. While traditional notions of research often focus on a
linear process in which faculty
researchers generate questions, conduct a study, and publish
results, the translational process
begins and ends with researcher and partner together at the table
co-leading the inquiry process
(Ortloff & Bradley-Levine, 2008; Petronio, 2002; Smith &
Helfenbein, 2009). In the current
case, the demand for university-level research intersected with a community partner's need for accountability and translated into products beneficial for the partner, its program, participants, the university, and the academic community in general.
The translational methods described here are much like a
moving target. Indeed,
forming a true partnership is not considered an end in itself, but
rather an ongoing practice.
Partners aimed to learn from each other throughout the research
process and to better meet the
needs of the community as a result. Our case is no exception.
As such, we find it necessary to
describe some history of the field of translational research.
Next, we identify common
understandings of stakeholder involvement within evaluation
and qualitative research
literature, but note that we prefer the term “partner” to
“stakeholder” in order to draw attention
to the intended horizontal relationship we are cultivating with
the community. However, we
will use the terms “partner” and “stakeholder” interchangeably
given that the latter is more
commonly used in the selected literature. Lastly, we outline the
specific methods we utilized
in the translational research process, drawing on research
methodology across disciplines.
These methods are by no means a “how to” list for translational
research among community
partners, but rather describe what evolved “at the table” when
we came together with our
research partner.
Finally, while it is important to note that program evaluation is a large piece of the relationship between the research center we represented and the diocese, it is just one part of the translational relationship, though it is the emphasis of this article.
The goal of forging opportunities
for translational research is, indeed, to improve practice for
community partners—through the
work they need, but also through university research made
public—and to overtly engage local
stakeholders who are experts of their contexts in order to make
university resources relevant
and applicable to real community needs (Smith & Helfenbein,
2009).
Our case is but one example, and in writing this article, the
reflection process prompted
us to further define what we mean by “translation.” Thus, the
methods in translation described
here served a dual goal: to aid community partners in meeting
their need for
evaluation/research, and to extend current notions of qualitative
research for the purpose of
bringing the needs of the community to the fore of scholarship
(Petronio, 2002).
Literature: Approaches to Translation
Translational Research in Communications and Medicine
Both communications and medical research scholars have a
recent record of using
translational research in their respective fields. Petronio (2007)
and Maienschein et al. (2008)
acknowledge the more recent and popular focus bestowed upon
translational work through the
National Institutes of Health (NIH) and their “Roadmap for
Medical Research” issued in 2003,
in which the U.S. federal government called for scientists to
intensify their efforts to apply
medical results more rapidly than in the past. However, as early
as the mid-1990s, Petronio
(2007) described a commitment to “translating research into
practice” (p. 215). In other words,
she advocated a way for communications scholars to establish
methods of implementation that
would be “geared toward developing functional practices to
improve lives” (p. 215). There is
a subtle difference between the two fields’ treatment of the
word translation, though both
involve the increase of efforts toward bringing scholarship and
research to the clinical or
community places where the application of new knowledge is
most pressing.
Woolf (2008) refers to these two types of translational work in
the medical field as T1
and T2. T1 is identified as the “new methods for diagnosis,
therapy, and prevention and their
first testing in humans” as have been acquired from recent
laboratory work (p. 211). T2, on the
other hand, focuses on the intersection of the results from T1
with the “community and
ambulatory care settings” which involves a whole host of
unpredictable variables and
disciplines that characterize work with “human behavior and
organizational inertia” (pp. 211-
212). Simply put, T1 appears to be the actual drugs and
treatments that emerge from the lab,
while T2 refers to the ways in which the drugs and treatments
are accessed by the patients and
communities who need them. From a research perspective, T1
requires more quantitative
approaches such as experimental design whereas T2 benefits
from qualitative approaches
because the goal of T2 is to answer questions of why and how
communities and individuals
use the innovations developed through T1 research. Moreover,
what Petronio and
communication scholars have been calling “translating
scholarship/research into practice” for
over a decade closely resembles Woolf’s T2.
Petronio (2007) identified several types of translational
validity which address the
uncertainty of applying findings to practice and help further
define their contribution to the
field. These are “experience,” “responsive,” “relevance,”
“cultural,” and “tolerance” validities
(Petronio, 2007, p. 216). Each describes aspects and enactments
of communication to which
translational scholars must be attentive in achieving the goals of
translation. More specifically,
they explain the precise means for the researcher and the
stakeholder’s partnership in the
inquiry, and how these should proceed. The five types of
validity not only offer “criteria for
the admissibility of evidence” and ways to “align scholarship to
the translational process”
(Petronio, 2002, p. 511), but in our understanding they propose
how stakeholders and
researchers collaborate in research.
Experience validity recognizes the lived experience of the
research partners and
subjects. Responsive validity obliges researchers to remain
attentive to society’s changing
needs. Relevance validity ensures that value is placed “on the
issues important to target
populations,” making certain that community needs come first
when researchers are deciding
which questions to explore in their work (Petronio, 2002, p.
510). Cultural validity respects
both the ethnicities and customs of various cultural groups and
ensures that these serve as a
context for research translation. Lastly, tolerance validity
upholds the iterative research process
by recognizing “taken-for-granted phenomena that occur in
everyday life and passing that
understanding on to others” (p. 511).
In essence, we observe a strong correlation between
translational validity and
qualitative research (Petronio, 2002). The five types of validity
offer a way for qualitative
researchers to define their ontological and epistemological
views by means of the translational
approach. Many qualitative approaches acknowledge the social
negotiation of both the
researcher’s and participants’ views of reality (Creswell, 2007).
In this view, there is not one
reality, but a mutual perspective in which researcher and
participant (among others) collaborate
to build and share their respective understandings of their lived
experiences. Knowledge is
likewise generated through iterative and negotiated processes
within the shared research.
Petronio’s five types of validity assist the researcher in calling
attention to the many contexts
and reasons for keeping collaboration and negotiation at the
forefront of the research process.
Within Petronio’s five types of validity, researchers selecting
qualitative approaches can
recognize ways to describe, evaluate, and substantiate their
collaboration with stakeholders and
the community. They also aid the researcher in being attentive
to ways in which collaboration
ought to take place.
Likewise, the five types of validity (in varying ways) highlight
what we, through our
partnership with the diocese, have sought out in meeting their
needs based on their particular
circumstances, practices, cultures, and overall lives that existed
prior to our involvement, and
persisted after we left the field. Experience, cultural, and
tolerance validities are the most
applicable to our case of program evaluation. Each represents
the ways in which we continually
negotiated the terrain of translational work in the evaluation of
the after-school program
through a deep contextual understanding of our partner’s lived
experience and culture. Because
the relationship with community members is so integral to
translational work, we now turn to
the literature’s treatment of stakeholder participation in
evaluation and research to help address
the issue of researcher and community relationships.
Stakeholder Participation and Communication
More common notions of partner involvement in the literature
refer to degrees of
stakeholder participation within evaluation and academic
research. Taut (2008) reviewed
several researchers’ conceptions of stakeholder involvement
within evaluation research, in
particular, and found no consensus regarding how many stakeholders should be involved in research or to what degree. Nonetheless, she noted that all of these researchers agree that stakeholders should be engaged to some extent. In a widely-cited article
concerning types of
collaborative evaluation, Cousins and Whitmore (1998)
distinguished between two types of
participatory research, which they term “Practical-Participatory
Evaluation” (P-PE) and
“Transformative-Participatory Evaluation” (T-PE). In P-PE, the
evaluator leads most aspects
of the evaluation along with the participants, while T-PE
characterizes full stakeholder
involvement (Cousins & Whitmore, 1998; Fitzpatrick, Sanders,
& Worthen, 2010).
O’Sullivan and D’Agostino (2002) applied Cousins and
Whitmore’s framework and
further explained that utilization of findings is an important
consideration when debating the
role of participants in evaluation. They found that although some
participants believe that the
evaluator should be the one who moves forward with the
findings, most believe it is the
involvement of stakeholders that will increase utilization of an
evaluation (O’Sullivan &
D’Agostino, 2002). They also found that participation can be
loosely defined and must be
treated with caution. Simply providing program data can be
termed “participation,” but true
collaboration moves beyond data provision to imply the
“desired level of involvement”
(Fitzpatrick, Sanders, & Worthen, 2010; O’Sullivan &
D’Agostino, 2002, p. 373).
Similarly, stakeholder involvement is often dependent on the
desired outcomes of the
study (Taut, 2008). If there is a social justice goal regarding the
empowerment of participants,
then it is often the case that every stakeholder is involved and
the use of an evaluation’s results
becomes diminished. However, if the utilization of findings is
most pressing, the involvement
of fewer participants is often perceived as more beneficial to
the evaluation process (Taut,
2008). In either case, a belief in stakeholder contributions
places varying conceptions of
participation and the use of research outcomes at the center of
defining what collaboration in
evaluation means. We recognize the contribution of
translational research for its consideration
of participant/stakeholder contexts and study outcomes (Smith
& Helfenbein, 2009).
Some literature considers the many ways in which participants
ought to be involved in
research, both practically and ethically. These include roles in
participatory types of inquiry,
in challenging notions of hierarchy and power, and for the
contributions they make to the
research process (Fine, Weis, Weseen, & Wong, 2000; Garner,
Raschka, & Sercombe, 2006).
What translational research brings to bear on these levels of
understanding for participant
involvement is the idea of challenging current university
practice (Smith & Helfenbein, 2009).
What is confronted is the very formation of inquiry in the first
place. Translational researchers
use methods that seek to set community partners’ questions as
the guiding force for new
research, and emphasize the practice of collaboration and
reciprocity to simultaneously meet
the immediate needs of the community and university (Petronio,
2002).
Taken together, the literature summarizes varying conceptions
but falls short of making
actual methods of stakeholder collaboration explicit (O’Sullivan
& D’Agostino, 2002; Taut,
2008). The translational partnership described below sheds light
on ways stakeholders and
evaluators can work together in one type of qualitative research,
both to increase participation
on all sides and to illuminate a new method for carrying out
university research and evaluation.
Cunningham (2008) asserts that collaboration must foster
participation in ways that “remove
barriers between those who produce knowledge (researchers)
and those who use it
(practitioners)” (p. 375). Thus, we articulate understandings of
participatory research and
evaluation in the following table.
Table 1. Summary of Collaborative Research/Evaluation Strategies and Elements of Inquiry

Practical Participatory Evaluation
- PI/Evaluator role: Balanced leadership of the inquiry with stakeholders, but ultimate decision-making with the PI.
- Stakeholder involvement: Balanced involvement in the inquiry process, but ultimate decision-making with the PI.
- Goal of inquiry: PI and stakeholders together determine utilization of findings locally.

Empowerment Evaluation
- PI/Evaluator role: PI is facilitator of the inquiry.
- Stakeholder involvement: Full involvement in the inquiry and decision-making process.
- Goal of inquiry: Stakeholders determine utilization of findings with the goal of empowerment.

Translational Research/Evaluation
- PI/Evaluator role: Co-leads the inquiry with local stakeholders; brings university resources to inform/support the inquiry; expert of the evaluation/research process.
- Stakeholder involvement: Co-leads the inquiry with the PI; expert of the local context.
- Goal of inquiry: PI and stakeholders determine utilization, application, and publication of findings; ensures that research outcomes directly improve stakeholders' roles in the community and the lives of the target population, in addition to contributing to the wider body of knowledge.

Adapted in part from Cousins and Whitmore (1998) and Fitzpatrick, Sanders, and Worthen (2011).
Common to all types of research and evaluation are the three
elements: principal
investigator (PI)/evaluator control, stakeholder involvement,
and the goal of the inquiry. Each
of the three types of research/evaluation summarized in the
table highlights different views of
the three elements. The principal investigator/evaluator may control all aspects of the research, share research decisions locally with stakeholders, or strike a balance between the two. Stakeholder involvement may extend to all stakeholders across all aspects of the inquiry (e.g., transformative evaluation) or to only a select few stakeholders in a small number of research decisions (e.g., some types of participatory evaluation). Lastly, the goal of the inquiry could be to forge a
partnership with stakeholders
within an organization (e.g., transformative evaluation), or for
results to be fed back into the
local organization when the research is complete (e.g.,
participatory evaluation). Most
important to our current work, however, are characteristics of
the third type: translational
research. Translational research maintains many of the aspects
of the types above, but also
acknowledges that both the evaluator and stakeholder are
experts of their own contexts. It
works toward bringing together the best of research and practice
in order to further the goals
of the community within the framework of university research
such as in our case.1 In sum,
stakeholders and the researcher both participate and contribute
to the inquiry, and the results
of research are to be applicable to the community organization
and published in a manner that
makes the findings practical and available to the wider
academic and public community.
Translational Methods
Enacting Translational Research through Partnership
The partnership between the research center and the diocese
began in the spring of 2007
when the after-school program director approached the director
of our center to discuss the
diocese’s need for a more meaningful evaluation of their
program. The center’s translational
research model required that researchers “be invited into a
position where [they] are able to
describe (or retell) events, as well as the rationale for decisions
from the organization’s point
of view” (see Smith & Helfenbein, 2009). The diocese’s need
and our expertise opened the
door for a collaborative partnership. The diocese was then
applying for grant renewal to fund
their program and sought opportunities for on-going formative
feedback that would impact
program implementation and quality, and the potential for the
program director to contribute
to the evaluation design and process. Our first task was to
create the evaluation plan for the
diocese’s grant narrative. Pivotal to this task was the
development of research questions which
were crafted from the after-school program’s goals. Secondly,
we sought approval to work with
human subjects from our university’s institutional review board
(IRB), which ensured that our research included the necessary documentation, safeguards, and transparency to protect participants' privacy.
Once the diocese reviewed and provided feedback on our
evaluation plan and the IRB
approved our protocol, the research team began the process of
understanding the after-school
program and how it fit into the program’s goals and mission
(Fitzpatrick, Sanders, & Worthen,
2011), reflective of Petronio’s (2002, 2007) experience validity
and cultural validity. As part
of this team, the authors explored the diocesan website,
reviewed curricular materials from the
program and schools, and attended staff trainings as participant
observers. These activities
allowed us to “take into account the lived through experience of
those we [were] trying to
understand” (Petronio, 2002, p. 509). After the initial work in
seeking to better understand the
origin and mission of our community partner, the research team,
led by one of the authors,
entered the field and began in-depth observations of the
program’s summer camp. During this
time, it was essential that team members engage with the staff
to establish a “supportive, non-
authoritarian relationship” in order to increase trust and get to
know more about the program
without being intrusive (Carspecken, 1996, p. 90). To
accomplish this, the team often ate lunch
with the staff during site visits to the camp, and we also made
ourselves visible to the staff each
day. This prolonged engagement, represented through the length
of time we were in contact
with the staff and students, as well as the number of hours we
observed the program, served to
“heighten the researcher’s capacity to assume the insider’s
perspective” (Carspecken, 1996, p.
141). It also represented validation to the program director that
we were committed to the
project and willing to invest significant amounts of time and energy in order to "build trust, learn the culture, and check for misinformation" (Creswell, 2007, p. 207). The trust built during the initial months of the partnership led to what Smith and Helfenbein (2009) refer to as "shared decision-making/generating inquiry questions, which involve[d] a pushback against pure objectivity or self-proclaimed independence" (p. 93). In short, the collaborative process began as a result of early trust building and prolonged engagement, representing aspects of experience and cultural validity and the larger frame surrounding the participants' experiences (Carspecken, 1996; Petronio, 2002).

1 University-based research may not always be the locus for the primary investigator, but it is noted that this was the original intent when Petronio (1999) wrote of translating "scholarship" into practice. University research is what we mean when we discuss our roles as researchers and evaluators within the university research center.
Collaborative Evaluation Design
Because the research center was hired to evaluate the after-
school program, questions
regarding what the program wanted to know were decided jointly by the program director and the research lead, a position in which both authors
served. This aspect of the
translational process most aptly reflects relevance validity as
we desired to place value on the
program’s needs and to use their knowledge and descriptions of
the issues that were important
to them (Petronio, 2002). The researchers saw the staff and
partners located within the schools
and the community as the authorities of their environments; as a
result, we had the opportunity
to collaboratively develop appropriate methods in order to
answer the most vital questions
driven by program needs.
Working in concert, the research lead and the program director
adopted a modified
version of the Extended-Term Mixed-Method Evaluation
(ETMM) design (Chatterji, 2005),
including the following components: a long-term time-line; an
evaluation guided by the
program’s purposes; a deliberate incorporation of formative,
summative, and follow-up data
collection and analysis; and rigorous quantitative and
qualitative evidence. This design was preferred by the directors and researchers at our
university research center for its
deliberately flexible, yet specific, methodology that permitted
transformation over time, in
response to program changes and growth. The ETMM design
also enabled the team to
effectively combine formative and summative data points within
the appropriate timelines. For
example, formative data reporting was more useful to program
staff mid-way through the
academic year and in our informal monthly meetings, whereas
summative information
concerning student data (i.e., program attendance and analysis
of standardized test scores) was
valuable at the year’s end for both state and local reporting. The
key data points included
observations, interviews, focus group discussions, surveys, and
student-level data including
test scores, grades, and attendance records. Although the
research lead usually directed the
initial development of protocols and surveys, these instruments
were shared at various points
of development with the program director, which afforded
opportunities for her to include
questions she needed or wanted to ask. Additionally, because
we could not “presume we
[knew] what [was] best for [our community partners] or how to
best address their… needs,”
program effectiveness and implementation questions changed
with each year of the grant, and
we met regularly with the program director to ensure that the
research and evaluation were
meeting the concerns of each grant year (Petronio, 2002, p.
510). The selection of the ETMM
design for program evaluation likewise supported this type of
flexibility (Chatterji, 2005).
Participatory Observations
Petronio (2002) found that qualitative methods are often more
conducive to the aims of
the five types of translational validity. The use of qualitative
participant observations in our
research privileged both the experiences and culture of the
participants and the surrounding
organizations within the diocese’s after-school program. After
the summer camp came to an
end, researchers made plans to begin evaluating the after-school
programs held in seven sites
serving over 700 students for the academic year. Because the
evaluation of the after-school
program was a much larger undertaking than what was offered
during the summer, the research
team began site visits by watching from a distance, careful to
observe each program
component, and student and staff interaction in their natural
settings. However, after a short
time, we returned to the participant observer paradigm in order
to help build trust with
participants, as well as to yield a participant’s perspective of
the program (Creswell, 2008;
Petronio, 2002, 2007). We began offering our assistance to
students during the time allocated
for homework help, which built rapport with the students while
offering an extra set of hands
to reduce the staff’s workload. Working with the students on
homework also gave us
opportunities to talk to participants in order to discover
important insights regarding their
experiences. As participant observers we were able to build
credibility with the program staff,
who noticed that members of the research team were fellow
educators and/or parents. As a
result, they welcomed us more readily into their buildings,
which helped the research proceed
more efficiently. We visited each of the schools where the after-
school program took place
between four and eight times each semester during each school
year.
The research team also utilized interviews and focus group
discussions, which probed
the “layered subjectivity” of participants, allowing them to
discover and revise their initial
thoughts and emotions through each stage of the research
(Carspecken, 1996, p. 75). Our
familiarity with the program, and the trust we built with participants (including staff, students, and parents) during extensive observations, permitted them to give what we believed to be candid responses to interview and focus group prompts. For
example, given the option to turn
off the recorder so that a critical remark would be “off the
record,” many participants chose to
leave the recorder on, showing that they trusted we would not
only maintain their
confidentiality, but that we understood the context of their
comments. We found that staff
members were more likely to share complaints with us when
they knew that the information
would be passed to the program director anonymously. This
represents an important ethical
consideration central to translational methodology in which we
attempted to “place greater
value on the issues that [were] important for [the] target
population” (Petronio, 2002, p. 510).
These honest exchanges enabled the diocese’s program director
to offer assistance and
problem-solve with the after-school staff throughout the year.
The trust in our research team that program staff developed
during the evaluation
supported our efforts to conduct balanced focus group
discussions with parents as well.
Although staff members were responsible for recruiting parents
to participate in the discussions
and we might have expected that they would invite only those
parents who were pleased with
the program, we rarely held a discussion with a group of parents
who made only positive
contributions. Rather, staff wanted to hear the constructive
feedback from parents they knew
were not perfectly satisfied, and they believed that we would
utilize this data to help them
improve the program.
In addition to the qualitative data collection discussed above,
the research team and
program director co-designed staff, student, and parent surveys
to ensure that as many
stakeholders as possible were given the opportunity to share
their perceptions of the program,
highlighting our commitment to the ideal that the research serve
a relevant purpose for all
populations involved (Petronio, 2002). Surveys were
administered during the fall and spring
of each academic year. Before each administration period,
members of the research team and
the program director collaborated in a review of the surveys to
determine whether revisions to
questions needed to be made or new topics of interest should be
probed. Program staff usually
administered surveys, which were available online and on paper.
Parent surveys were also
translated into Spanish by a staff member.
Ongoing Formative Feedback
Because data collection occurred almost continually throughout
the length of the multi-
year grant period, formative feedback was both expected and
needed by the program director
and staff. The research team utilized the constant comparative
analysis model (Glaser &
Strauss, 1967), which allowed us to engage in continual analysis
whereby themes emerged,
developed, and changed. Several months of data collection, usually over a naturally occurring time frame such as a semester or summer vacation, were followed by short but intensive periods of analysis. Emerging themes were reported to the program
director and staff via formative
feedback reports. These served as member checks because the
director and staff were invited,
and even expected, to offer their perspectives on the findings.
Reports typically went through
at least two rounds of revisions as a result of these member
checks.
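As an illustration only, and not the analysis tooling the team actually used, the short Python sketch below caricatures one constant-comparative cycle: new field-note segments are compared against an evolving codebook, unmatched segments open new themes, and the resulting theme counts form a draft formative report to be revised after member checks. The field notes, codebook, and keyword matching here are hypothetical stand-ins for the researchers' interpretive coding.

from collections import defaultdict

def code_segment(segment, codebook):
    """Assign a data segment to an existing theme if a keyword matches;
    otherwise open a new theme (the 'constant comparison' step).
    Keyword matching stands in for interpretive judgment."""
    for theme, keywords in codebook.items():
        if any(k in segment.lower() for k in keywords):
            return theme
    new_theme = f"theme_{len(codebook) + 1}"
    codebook[new_theme] = [segment.lower().split()[0]]  # seed with first word
    return new_theme

def analysis_cycle(segments, codebook, themes):
    """One collection-then-analysis cycle: code new segments, update the
    running themes, and return theme counts for a draft formative report."""
    for seg in segments:
        themes[code_segment(seg, codebook)].append(seg)
    return {theme: len(examples) for theme, examples in themes.items()}

# Hypothetical field notes from one semester (illustrative only).
codebook = {"homework_help": ["homework"], "staffing": ["staff"]}
themes = defaultdict(list)
fall_notes = [
    "Students asked for more homework help time",
    "Staff turnover disrupted afternoon routines",
    "Parents praised the new reading corner",
]
draft_report = analysis_cycle(fall_notes, codebook, themes)
print(draft_report)  # revise after member checks, then repeat next cycle

In practice, of course, coding decisions were made and revisited by the team and checked with the program director and staff rather than by any automated matching.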
The diversity of the research team facilitated the constant
comparative analysis process
and helped address issues of cultural validity through our
appreciation of the local ethnicities,
customs, and routines of the after-school program, staff, and
students (Petronio, 2002). As
mentioned previously, a number of team members were former
teachers with experience and, therefore, expertise in working with students in the grade levels
that the program served.
However, the diverse backgrounds of other team members also
contributed to the overall team
perspective. For example, a social work major was also a
graduate of one of the schools within
the program; she was able to provide a community perspective
to our analysis. Another team
member was an international student who offered a more global
analytic perspective. Also,
because of her outgoing and kind personality she was admired
by the children in the program.
Other team members included psychology majors, higher
education graduate students, and
sociology majors. The diversity present in the research team
facilitated internal debate and
perspective taking that we believe would not have occurred
within a homogeneous team, and
which supported the translational research process from partner
development and evaluation
design through data collection, analysis, and cultural awareness.
From the start of this project, we explicitly strove to keep lines
of communication open
and transparent. To this end, we made our analysis process as
understandable as possible by
including the program director in various analysis sessions,
which provided another
opportunity for member checking and for disclosing both our own and our partners' biases and
values (Petronio, 2002). This sharing allowed us to be clear
about the ways the evaluation
unfolded and to make the research process accessible to
members of the after-school program
staff. However, this open communication was complicated at
times. For example, at various
points during our partnership we were asked to share confidential information, such as the identity of a staff member whom we had observed doing something that the program director found
unproductive. At these moments, we had to find ways to balance
our commitment to preserve
confidentiality with the program director’s need for impartial
information. But it was at these
instances of tension that we believe the trust we had built
through our partnership allowed us
to engage in conversations where we shared, and learned from,
our different perspectives.
Another form of member checking occurred as a result of our
regular communication
with staff at each site. Our bi-monthly visits allowed us to serve
as a vehicle for facilitating
interaction among the sites as well as checking our findings. We
often shared successes that
we observed with sites that were struggling or looking for new
ideas, while staff provided us
with information about the students, schools, and communities
they served. In these ways, our
exchange resulted in greater understanding of the context for
the research team and increased
knowledge sharing (Petronio, 2002) among the sites through our
informal reports and continual
communication.
Learning from Translation
Our experience with translational research has positioned us
toward demonstrating that
“shared ownership of the research process present[ed]
conditions for empowerment and
create[d] a dynamic exchange of ideas about how to best
implement and study an
intervention/program” (Smith & Helfenbein, 2009). We say
“positioned” because translational
research represented an ideal in some respects. Yet it is a type
of research within which we find
worth and value. Still a moving target, our understanding of
translational evaluation and
research resonated with Petronio’s (2007) notion of naming this
kind of research a “challenge”
(p. 216). Her five types of practical validity for translational
work provided us with an explicit
framework for facilitating stakeholder participation in our
research. Because we sought to
understand our partner’s lived experience throughout the
evaluation process, we achieved some
aspects of shared knowledge, and also came up against some
difficulties. While in the field as
participant observers, for example, we made efforts to build
positive relationships with our
participants, which helped us transcend certain difficulties.
Highlighting Petronio’s (2002, 2007) experience validity, our
data collection was
fostered within the context of the program’s current practice.
And although our proximity to
the site staff “as they enacted [their work]” permitted us access
to the lived experience of the
after-school program, we might have been lacking in other types
of Petronio’s translational
validity because we did face some challenges in “transform[ing]
findings into meaningful
outcomes” (p. 216). However, because of our attention to the
experience and practice of our
partners, we felt that our shared trust facilitated tackling issues
that were difficult or
uncomfortable for either the program staff or the research team
members. An illustration of
this challenge is depicted below.
At one site, it seemed as though the more research team
members shared data with staff
members, the more strained our relationship became. The site
director and program staff began
to view us more as “external evaluators” than as partners and
were less likely to respond
positively to our presence at their sites. In addition, shortly
after our mid-year reports were
disseminated, we had a sense that the site director or program
staff members were scrambling
to show us “what the evaluators want to see” rather than a
typical program day. The site director
and staff were also sometimes concerned because we came on
the “wrong day” and were not
going to see their program at its “best.” To alleviate these
tensions, we continually reassured
staff that we were seeing many positive things happening at
their site. We would often name
specific strengths of their program or remind them that during
previous visits we had seen many
positive elements. When faced with areas in need of improvement,
we shared ideas that we had
seen implemented at other sites that might help them improve.
In addition, we started to ask
upon arrival whether there were particular activities that the site
director wanted us to see that
day. This allowed the site director and staff to show us their
best and helped put them at ease
concerning whether we would see what they had hoped. For her
part, the site director became
much more direct about telling us what we had missed the previous week or day, and began to share
stories about program elements of which she felt proud. Other
site directors also shared their
concerns with the program director, who was able to
communicate some of these to us on their
behalf. The nature of our ongoing communication with the
program director and site directors
gave us many opportunities to directly address the tensions, and
work toward finding realistic
and empowering solutions as quickly as possible. It also
enabled us to become more responsive
in the way we communicated with the after-school program staff
as a whole “to be receptive
to human conditions” and sensitive to the manner in which our
communication affected staff
behavior (Petronio, 2002, p. 510).
The above tensions reflect one challenge in attempting to
involve all staff members
relative to the utilization of research and evaluation findings.
Cousins and Whitmore’s (1998)
delineation between practical-participatory and transformative-
participatory evaluation applies
to our difficulties in that not all program staff were entirely
enmeshed in the present evaluation.
The diocese’s program director and each of the seven site
directors for the after-school
programs were our main contacts for collaboration. Site staff
members were involved on a
more cursory basis, and usually in response to the program
director’s request for assistance in
the evaluation. In accord with O’Sullivan and D’Agostino’s
(2002) description, site staff
members were "participants," but, as noted above, this term is often used loosely: merely by permitting us access to the program at their respective sites, site staff were participating.
In seeking to understand why some of our findings were
received with tension by site
staff, we considered again the five types of translational
validity as described by Petronio
(2002, 2007). In addition to the need to address the limited
participation of site staff, Petronio’s
tolerance validity points out our probable deficiency in
“honoring existing patterns when [we]
bring research into practice” (p. 216). With our main
communication residing with the overall
program director, our findings were not well received on
occasion because they passed through
the program director first before proceeding to the site
directors. Had we better addressed
tolerance validity, we would have been more cautious and
cognizant of the intersection
between the evaluation results and the sites where the research
took place. This junction of
communication must be a place where we, as translators of
research, position ourselves and the
research to be more collaboratively interpreted and presented.
In hindsight, we should have
offered a work session where site directors and staff were
invited to view the research and
discuss findings and implications with the research team before
creating a collaborative report.
Another significant characteristic of the research to which we
had been attentive
concerned the hierarchical relationships between the program
director, site directors, and staff.
Though we, as the research team, fit somewhere between the
program director and site
directors, we constantly found ourselves searching for ways to
“work the hyphen” in our
researcher-participant relationships (Fine, Weis, Weseen, &
Wong, 2000, p. 108). We cast the
positivist notion of “objective expert” aside in favor of adopting
an approach of solidarity in
which we hoped to have “[undergone] an important shift, from
that of an outside appraiser to
that of a collaborator” (Cunningham, 2008, p. 375). In sum, we
hoped to truly collaborate with
our partner. Yet, as explored in this article, this is an aspect of
our translational process that
experienced both success and tension. Our frequent site visits
and the participant observation
paradigm we followed facilitated our mutual respect in the
field. However, because the
diocese’s program director led the collaboration efforts with the
research team leaders, the
researchers’ relationship with site staff appeared unbalanced at
times (though most site visits
proceeded smoothly). Additionally, both authors are former
educators in schools similar to the
ones served by the after-school program, and our own
backgrounds likely influenced our
interactions with the sites and their staff, such as in
recommending program changes based on
our prior experiences. However, our goal as translators of
research into practice compels us to
discover more appropriate methods for collaborating with all
staff. As we move forth, we must
echo Petronio’s (2002) call for increased communication in
order to apply “new ways of
conceptualizing a problem [and] make our work more accessible
to the people who are not in
academia” (p. 511). In this way, we will be able to truly
understand the context in which staff
members interact not only with our findings, but also with us as
partners in the research process.
Limitations
There were some notable limitations to the translational
research approach in our
evaluation study. Aside from the challenges noted above in
“learning from translation,” several
limitations existed due to the fact that as researchers for a
university center, we had been hired
to complete a specific program evaluation for the seven school-
based, after-school programs.
Because our employment at the research center depended on the
funding generated from the
program evaluation, we were limited in some respects by the
evaluation requirements.
Additionally, some after-school site staff members hesitated to
participate in the evaluation
beyond the provision of data; most after-school staff members
worked other jobs and were paid
little (Halpern, 2003). Thus, we understood their trepidation
when they declined to invest more
time in a collaborative research project beyond their current
capacities as after-school staff
members. Most of our collaboration took place with the after-
school program director who was
our point person for the evaluation contract. In retrospect, we
would have valued building
autonomy and leadership from the ground level up with each
after-school site staff member,
but this would have required altering (somewhat radically) the job
descriptions of these individuals.
A final limitation concerns our desire to work more
intentionally in the results and
implementation phase of our research, something which our
evaluation proposal did not fully
encompass at the academic year’s end. In order to truly work
toward the translational research
ideal, our results must press toward practicality, functionality,
and program quality
improvement (Petronio, 2002). This may include redefining
some traditional evaluator
functions in the future (i.e., extensive data analyses and
summative reporting) in favor of
participating in collaborative quality improvement teams that
work more closely with
community partners within formative data collection and
application paradigms (M.H. King,
personal communication, May 28, 2013).
Implications and Conclusion
The collaborative research processes that we utilized through
the enactment of
translational research are relevant and important for all
qualitative researchers. In writing this
article, we set about demonstrating how collaboration with
stakeholders during the research
process can contribute to authentically translational outcomes.
In our case, the program
director, site directors, staff members, students, and parents
participated at various levels in the
design, data collection, and analysis processes. As a result, we
saw findings and
recommendations acted upon despite various imperfections in
the process. Our close
communication with the program director and site directors
assisted in ensuring that the context
for collaboration and translation was in place. Throughout
the data collection, analysis, and
reporting procedures, we approximated the true partnership both
we and the diocese desired.
The second piece of our translational research endeavor
consisted of the practical application
and dissemination of findings. In addition to informal meetings
and formative feedback
throughout the academic year, this article itself is another
instance of our commitment to
advancing research methodology within the wider community.
Petronio's five types of validity address how, in our view, translational researchers should engage with partners and work to translate findings into
practice. They draw attention
to the experiences, history, customs, values, and existing
patterns of participants within both
translational processes and products. Also important was
studying the relationships within the
process of implementing the translational product. How we
presented our evaluation report to
after-school staff members, for example, was no less important
than the evaluation work itself.
Care for the people and places with whom we work, and for those who will use our findings, is necessary for translation to occur. Table 1 fails to
provide a description of the
products of various research models, or to demonstrate whether
an outcome or product is
important at all. This area requires further research.
Translational research highlights the
process of the partnership, but also points toward a product and
the means for putting that
product into practice. The other approaches in the table do not make
products of the research explicit,
and if they do, such as when Taut (2008) described the
usefulness of evaluation, the
partnerships among researchers and stakeholders were given
less importance in an effort to
come up with a practical product.
Figure 1 below highlights what we have discovered to be
integral components of our
translational research work. The first concerns the relocation of
university research into
community spaces, and the concern for the eventual translation
of findings into practical
solutions for community partners. The application of findings
concerns both the local context
and also the larger academic community. The second important
feature involves the continuous
reflection of translational methods in terms of Petronio’s five
types of translational validity.
The last, and perhaps most important, feature is the notion of community partnership and of approaching this partnership in a collaborative manner. Through the ongoing
collaborative partnership, the
researcher(s) and community members take advantage of each
other’s knowledge and
resources in the co-construction of research questions and
within the research process itself.
Figure 1. Features of a Translational Research Model. [Figure: a collaborative translational methodology in which an ongoing collaborative community partnership (co-constructed research questions; researchers and participants co-lead the research process), framed by the five types of translational validity (experience, responsive, relevance, cultural, and tolerance), leads to the practical application of findings and to university research made public.]
Finally, Petronio’s (2002) discussion of objectivity within
translational research
illustrates that our work is not value-free; however, we must be
willing to examine how our
own values and subjectivities overlap with those of our research
partners. Here, “if we want to
work toward scholarship translation, we have to be clear on the
way the values of those being
researched and the researcher's values intersect" (Petronio, 2002, p. 511). This moves us beyond just "not interfering" (Petronio, 2002, p. 511) with the customs of our
stakeholders. In this way, we find
translational research challenging at best; yet our struggles neither preclude nor outweigh the fact that we also find it to be the most ethical and rewarding way to approach our work. We are
working with relationships that are tenable and evolving, and
despite our best efforts to be full
collaborators, tensions and imbalances are an inevitable aspect
of the process that we must
acknowledge and value. Furthermore, what we do have is the
understanding that the
relationship in which we participate is ongoing, is not an end in
itself, and through the trust and
communication we have built, we have hope that the process
will continue into the future for
the good of the partnership, the education programs served, and
the community.
References
Carspecken, P. F. (1996). Critical ethnography in educational
research: A theoretical and
practical guide. New York, NY: Routledge.
Chatterji, M. (2005). Evidence on “what works”: An argument
for extended-term mixed
method (ETMM) evaluation designs. Educational Researcher,
34, 14-24.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory
evaluation. New Directions for
Program Evaluation, 80, 5-23.
Creswell, J. W. (2007). Qualitative inquiry and research design:
Choosing among five
traditions (2nd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W. (2008). Educational research: Planning,
conducting, and evaluating
quantitative and qualitative research (3rd ed.). Upper Saddle
River, NJ: Pearson.
Cunningham, W. S. (2008). Voices from the field: Practitioner
reactions to collaborative
research. Action Research, 6, 373–390.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2005). The SAGE
handbook of qualitative research
(3rd ed.). Thousand Oaks, CA: Sage.
Fine, M., Weis, L., Weseen, S., & Wong, L. (2000). Qualitative
research, representations, and
social responsibilities. In N. K. Denzin & Y. S. Lincoln (Eds.),
Handbook of qualitative
research (2nd ed., pp.107-131). Thousand Oaks, CA: Sage.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011).
Program evaluation: Alternative
approaches and practical guidelines. Upper Saddle River, NJ:
Pearson Education, Inc.
Garner, M., Raschka, C., & Sercombe, P. (2006).
Sociolinguistic minorities, research, and
social relationships. Journal of Multilingual and Multicultural
Development, 27(1), 61-
78.
Most people agree we live in stressful times. Does stress and re.docxaudeleypearl
 
Most of the ethical prescriptions of normative moral philosophy .docx
Most of the ethical prescriptions of normative moral philosophy .docxMost of the ethical prescriptions of normative moral philosophy .docx
Most of the ethical prescriptions of normative moral philosophy .docxaudeleypearl
 
Most healthcare organizations in the country are implementing qualit.docx
Most healthcare organizations in the country are implementing qualit.docxMost healthcare organizations in the country are implementing qualit.docx
Most healthcare organizations in the country are implementing qualit.docxaudeleypearl
 
More work is necessary on how to efficiently model uncertainty in ML.docx
More work is necessary on how to efficiently model uncertainty in ML.docxMore work is necessary on how to efficiently model uncertainty in ML.docx
More work is necessary on how to efficiently model uncertainty in ML.docxaudeleypearl
 
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docx
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docxMortgage-Backed Securities and the Financial CrisisKelly Finn.docx
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docxaudeleypearl
 
Moral Development  Lawrence Kohlberg developed six stages to mora.docx
Moral Development  Lawrence Kohlberg developed six stages to mora.docxMoral Development  Lawrence Kohlberg developed six stages to mora.docx
Moral Development  Lawrence Kohlberg developed six stages to mora.docxaudeleypearl
 

More from audeleypearl (20)

Mr. Bush, a 45-year-old middle school teacher arrives at the emergen.docx
Mr. Bush, a 45-year-old middle school teacher arrives at the emergen.docxMr. Bush, a 45-year-old middle school teacher arrives at the emergen.docx
Mr. Bush, a 45-year-old middle school teacher arrives at the emergen.docx
 
Movie Project Presentation Movie TroyInclude Architecture i.docx
Movie Project Presentation Movie TroyInclude Architecture i.docxMovie Project Presentation Movie TroyInclude Architecture i.docx
Movie Project Presentation Movie TroyInclude Architecture i.docx
 
Motivation and Retention Discuss the specific strategies you pl.docx
Motivation and Retention Discuss the specific strategies you pl.docxMotivation and Retention Discuss the specific strategies you pl.docx
Motivation and Retention Discuss the specific strategies you pl.docx
 
Mother of the Year In recognition of superlative paren.docx
Mother of the Year         In recognition of superlative paren.docxMother of the Year         In recognition of superlative paren.docx
Mother of the Year In recognition of superlative paren.docx
 
Mrs. G, a 55 year old Hispanic female, presents to the office for he.docx
Mrs. G, a 55 year old Hispanic female, presents to the office for he.docxMrs. G, a 55 year old Hispanic female, presents to the office for he.docx
Mrs. G, a 55 year old Hispanic female, presents to the office for he.docx
 
Mr. Rivera is a 72-year-old patient with end stage COPD who is in th.docx
Mr. Rivera is a 72-year-old patient with end stage COPD who is in th.docxMr. Rivera is a 72-year-old patient with end stage COPD who is in th.docx
Mr. Rivera is a 72-year-old patient with end stage COPD who is in th.docx
 
Mr. B, a 40-year-old avid long-distance runner previously in goo.docx
Mr. B, a 40-year-old avid long-distance runner previously in goo.docxMr. B, a 40-year-old avid long-distance runner previously in goo.docx
Mr. B, a 40-year-old avid long-distance runner previously in goo.docx
 
Moving members of the organization through the change process ca.docx
Moving members of the organization through the change process ca.docxMoving members of the organization through the change process ca.docx
Moving members of the organization through the change process ca.docx
 
Mr. Friend is acrime analystwith the SantaCruz, Califo.docx
Mr. Friend is acrime analystwith the SantaCruz, Califo.docxMr. Friend is acrime analystwith the SantaCruz, Califo.docx
Mr. Friend is acrime analystwith the SantaCruz, Califo.docx
 
Mr. E is a pleasant, 70-year-old, black, maleSource Self, rel.docx
Mr. E is a pleasant, 70-year-old, black, maleSource Self, rel.docxMr. E is a pleasant, 70-year-old, black, maleSource Self, rel.docx
Mr. E is a pleasant, 70-year-old, black, maleSource Self, rel.docx
 
Motor Milestones occur in a predictable developmental progression in.docx
Motor Milestones occur in a predictable developmental progression in.docxMotor Milestones occur in a predictable developmental progression in.docx
Motor Milestones occur in a predictable developmental progression in.docx
 
Most women experience their closest friendships with those of th.docx
Most women experience their closest friendships with those of th.docxMost women experience their closest friendships with those of th.docx
Most women experience their closest friendships with those of th.docx
 
Most patients with mental health disorders are not aggressive. Howev.docx
Most patients with mental health disorders are not aggressive. Howev.docxMost patients with mental health disorders are not aggressive. Howev.docx
Most patients with mental health disorders are not aggressive. Howev.docx
 
Most of our class readings and discussions to date have dealt wi.docx
Most of our class readings and discussions to date have dealt wi.docxMost of our class readings and discussions to date have dealt wi.docx
Most of our class readings and discussions to date have dealt wi.docx
 
Most people agree we live in stressful times. Does stress and re.docx
Most people agree we live in stressful times. Does stress and re.docxMost people agree we live in stressful times. Does stress and re.docx
Most people agree we live in stressful times. Does stress and re.docx
 
Most of the ethical prescriptions of normative moral philosophy .docx
Most of the ethical prescriptions of normative moral philosophy .docxMost of the ethical prescriptions of normative moral philosophy .docx
Most of the ethical prescriptions of normative moral philosophy .docx
 
Most healthcare organizations in the country are implementing qualit.docx
Most healthcare organizations in the country are implementing qualit.docxMost healthcare organizations in the country are implementing qualit.docx
Most healthcare organizations in the country are implementing qualit.docx
 
More work is necessary on how to efficiently model uncertainty in ML.docx
More work is necessary on how to efficiently model uncertainty in ML.docxMore work is necessary on how to efficiently model uncertainty in ML.docx
More work is necessary on how to efficiently model uncertainty in ML.docx
 
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docx
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docxMortgage-Backed Securities and the Financial CrisisKelly Finn.docx
Mortgage-Backed Securities and the Financial CrisisKelly Finn.docx
 
Moral Development  Lawrence Kohlberg developed six stages to mora.docx
Moral Development  Lawrence Kohlberg developed six stages to mora.docxMoral Development  Lawrence Kohlberg developed six stages to mora.docx
Moral Development  Lawrence Kohlberg developed six stages to mora.docx
 

Recently uploaded

URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...RKavithamani
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 

Recently uploaded (20)

URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 

Team Diagnostic Survey Development Instrument

Wageman, Ruth; Hackman, J. Richard; Lehman, Erin. Team Diagnostic Survey: Development of an Instrument. The Journal of Applied Behavioral Science, Dec 2005, 41(4), ProQuest, pg. 373.
Lesson Week 3

This week we work with data analysis. The evaluator will need to explain how data will be collected and analyzed to their stakeholders. Most of us, myself included, are not experts in math or statistics. The point of this lesson is not to make you an expert but to expose you to the simple data analysis methods available and to the terminology, should you be involved in a project that requires a higher level of evaluation, so you are better able to understand and judge the accuracy of the results.

First, a statistic can be a number, characteristic, tool or technique. The data used can be measured by some simple central tendency characteristics. Examples of these are mode, mean and median. There are also measures of variability such as range, percentiles, standard deviation, and variance. For a good example of how this works in "real life" take a look at Making Data Collection of Statewide Programs Useful.

Another aspect that deserves to be mentioned is meta-analysis. This is an analysis of effect size based on the quantitative results of multiple previous studies of the same or similar interventions. Shown at this website is an online lecture on how to conduct meta-analysis. Finally, take the time to view or listen to Transparency Through Technology: Evaluating Federal Open Government. This will give you an idea of the importance of data and evaluation in the Federal Government. Data.gov offers data sets from which you can pull for evaluation purposes.

This week we also look at surveys and sampling. In order to gather the data necessary to analyze the problem or issue, the researcher could gather data from existing sources, such as the US Census, or they could conduct their own surveys. Other means of gathering data are interviews with key informants, focus groups, and incidence rates. Many times a sample survey will be given to a small representative group and extrapolated out to the entire population to produce projections of the results for a larger audience. (Think about how television show ratings are determined.) At times the researcher may not be familiar with the topic or area and might conduct what is called a "snowball" sample, where they ask the person being interviewed to suggest others who might help. When I was interviewed by a PhD student for her research, she asked if I could put her in touch with others who had the background I did. That is an example of a snowball sample.
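Before we turn to the survey material, here is a minimal sketch in Python (standard library only) of the central-tendency and variability measures named above; the sample values are invented purely for illustration.

    import statistics

    scores = [4, 8, 15, 16, 23, 42, 15]           # invented sample data

    print(statistics.mode(scores))                # mode: most frequent value
    print(statistics.mean(scores))                # mean: arithmetic average
    print(statistics.median(scores))              # median: middle value when sorted

    print(max(scores) - min(scores))              # range: spread between extremes
    print(statistics.quantiles(scores, n=4))      # 25th, 50th and 75th percentiles
    print(statistics.stdev(scores))               # sample standard deviation
    print(statistics.variance(scores))            # sample variance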
For our purposes here, we will concentrate on surveys and specifically the survey instruments used by the NPS for their visitor counts.

Part Two, Lesson Week Three

Data can be both qualitative (generally from observations or opinion surveys) and quantitative (generally numeric). You will also review the four levels of data: nominal (named categories with no inherent order), ordinal (categories that can be ranked), interval (ranked data conveyed in equal intervals) and ratio (interval data with an absolute zero). The Science Museum of Minnesota produces a very informative website discussing evaluation data in a clear and simple format. Take a look at their website and explore the links.

In order to look at the analysis performed on the data we have looked at so far, take a look at the NPS Visitor Use Statistics website. For this part of the lesson choose California. When you are done, make sure to explore the parks in your state! Now choose the Golden Gate National Recreation Area. You will see a series of reports. Click on the Traffic Counts by location. Remember those areas where they showed the inductive loop counter at the entrance to Lower Fort Mason? Then the amount subtracted as non-reportable? And finally the application of the multiplier from our previous lesson? Now you see the result. This is the data produced by those surveys. Now you have the data to make month-to-month comparisons as well as year-to-year.
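As a rough illustration of the arithmetic behind those traffic-count reports (the figures below are invented, not taken from any NPS report, and the real procedure involves additional adjustments), the monthly estimate is essentially the counter total, less non-reportable traffic, times the persons-per-vehicle multiplier:

    # Hypothetical one-month totals for a single counting location
    raw_traffic_count = 52_340      # vehicles recorded by the inductive loop counter
    non_reportable = 1_120          # traffic subtracted as non-reportable
    persons_per_vehicle = 2.6       # multiplier from the persons-per-vehicle survey

    reportable_vehicles = raw_traffic_count - non_reportable
    estimated_visitors = reportable_vehicles * persons_per_vehicle
    print(f"Estimated visitors for the month: {estimated_visitors:,.0f}")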
How might these stats be used? During 2015 the National Parks were asked for this information, among other requests:
· A consulting firm wanted assistance interpreting visitation data for a development proposal for a tribe
· The Bureau of Ocean Energy Management wanted 5 years of visitation data for some Alaska parks
· Glacier National Park wanted to identify their 100 millionth visitor
· A travel magazine wanted to know the ranking (by visitation) for a particular park

Next take a look at the Annual Park Recreation Visitation Graph. Is the visual representation easier to follow? More difficult? Review the other reports. Presentation of the data is almost as important as the data itself.

· Week Three Visitor Survey
· Following is a sample survey instrument for the Persons Per Vehicle Survey we have been discussing:

INSTRUCTIONS FOR ENTERING DATA ON THE PERSONS-PER-VEHICLE SURVEY FORM
<PARK NAME>

1. This survey will help your park establish a persons-per-vehicle multiplier to be used with traffic counts for estimating the number of people entering the park by vehicle.
· The surveyor conducts the survey for only one (1) hour during the sample period at each of the sample locations that is open on the day of the survey.
· If a survey time period is marked AM, please conduct a one-hour survey between the hours of 8:00 AM and 12:00 PM.
· If the survey time period is marked PM, please conduct a one-hour survey between the hours of 12:00 PM and 5:00 PM.
· The surveyor selects the AM or PM time period when he/she can safely and completely conduct a one-hour survey at the required location on the day listed on the accompanying survey calendar.
· Please vary your start times during each AM or PM sample period. Start times do not need to be on the hour, but the survey does need to be conducted for a one-hour period.

2. The surveyor fills out the bottom of the form by entering their name, the date of the survey and the time the survey begins.

3. To fill out the body of the form, count the number of people in each vehicle (boats or autos) as they enter the park. Place a tally mark in the appropriate box representing the number of persons in that vehicle. (If there are 2 persons in a vehicle, put a tally mark in column two (2). If there are more than 6 passengers in a vehicle, put the exact number in column 7+.)

4. At the end of each month being surveyed send the completed forms to:
Name of Person to Contact
National Park Service
NRSS/EQD/SSB
1201 Oakridge Drive
Fort Collins, CO 80525

If you have any questions, please contact Person's name at the above address or phone 970-225-XXXX. Thank you for your cooperation.

· Week Three Survey Tally
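To connect the form back to the multiplier used earlier, a completed month of tally sheets might be summarised along the following lines (Python; the tally totals and the handling of the 7+ column are invented for illustration and are not the official NPS procedure):

    # Tallies from one month of persons-per-vehicle survey forms (invented numbers).
    # Columns 1-6 hold one tally mark per vehicle; the 7+ column records exact passenger counts.
    tallies = {1: 38, 2: 74, 3: 21, 4: 18, 5: 6, 6: 3}   # persons per vehicle -> vehicles observed
    seven_plus = [8, 11, 7]                              # exact counts written in the 7+ column

    total_people = sum(p * v for p, v in tallies.items()) + sum(seven_plus)
    total_vehicles = sum(tallies.values()) + len(seven_plus)
    print(f"Persons-per-vehicle multiplier: {total_people / total_vehicles:.2f}")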
• 9. Equipping the Constructivist Researcher: The Combined Use of Semi-Structured Interviews and Decision-Making Maps
Reza Mojtahed, Miguel Baptista Nunes, Jorge Tiago Martins and Alex Peng
Information School, University of Sheffield, Sheffield, UK
Abstract: An interview is a technique used by qualitative researchers to elicit facts and knowledge about the phenomenon under investigation using a series of interview questions. Nonetheless, the establishment of conversation and negotiation of meaning during the interview process is still challenging for those who conduct interviews, no matter how skilled or experienced researchers are with the process. It is felt in particular that researchers would benefit from the use of an instrument that, in the course of semi-structured interviews, would foster an environment where the ideas and meanings conveyed by informants could be developed and further discussed in order to achieve a deeper understanding of the phenomenon under investigation. Therefore, this paper attempts to develop and introduce decision-making maps as a new instrument to be used during the process of conducting semi-structured interviews. This newly proposed instrument is inspired by the concept and practice of perceptual mapping. The paper discusses the rationale for proposing the development and application of decision-making maps in the context of semi-structured interviews, and reflects on the range of implications for the researcher, for participants, and for the practice of qualitative research that claims affiliation with constructivism.
Keywords: inductive research, constructivism, qualitative
  • 10. interview, perceptual mapping, decision-making map 1. Introduction Cohen & Manion (1994, p.36) described the constructivist approach to research as being based on understanding the world of human experiences. This world of experiences is continuously shaped through the human interaction with objects and other subjects. In order to access and achieve an understanding about human perceptions, one of the main requirements of the constructivist approach is the establishment of a reciprocal and communicational ground between the research project participants and researchers in the co- construction of meaning. Eventually this would lead to building a theory that is based on the experiences of researchers and that of research participants (Mills et al. 2006). Several authors have discussed the use of constructivist epistemological principles in inductive research. The constructivist paradigm traditionally follows qualitative research methods, although quantitative methods may also be used in support of qualitative data (Mackenzie & Knipe 2006). Since constructivist researchers tend to rely on participants’ viewpoints about the situations under investigation (Creswell 2003, p.8), the vast majority of inductive research remains interview-based and interpretivist in nature. Accordingly, the use of interviews as a data collection method in inductive research is justified by its affinity with daily-life conversations and the centrality of interactions, exchanges, and negotiation of meaning between two parties (Kvale & Brinkmann 2009), which corresponds to constructivist approaches to research. There are different approaches to carry out an interview, although the dominant characterisation of interviews
• 11. is based on the dichotomy between structured and unstructured interviews (Collins 1998). However, more varieties of interview styles have been recognised by researchers (e.g. May 2003, p.121), such as the semi-structured interview, group interview and focus group interview. Each of these types follows its own approach to conducting an interview and collecting the research data. A clear sign of the differences between the interview styles is the way the interview questions are formulated and the amount of freedom given to interviewees in their replies to each interview question (e.g. Bryman 2012). Nonetheless, the operationalization of qualitative interviews' underlying epistemological principles remains complex and at times controversial. Whether the researcher applies a semi-structured or unstructured interview, there is an unconditional principle that researchers need to adhere to during the interview process, which is the capacity of maintaining
  • 12. social negotiation of meanings between the interviewee and interviewer. This component is somehow missing or underachieved during the operationalization of research that claims a constructivist affiliation. In addition, most of the tenets of modern inductive approaches such as thematic analysis, grounded theory or even phenomenography are predicated in listening to informants’ perceptions of the social world around them, interpreting them and producing a theory that attempts to generate a context-bound understanding. This process contains an inherent artificiality since researchers are trying to understand social worlds by interpreting informants’ perceptions without any feedback loop that enables negotiation and validation of the adequacy of the interpretation. To address the challenge of researchers’ exclusive reliance on the interpretation of interview evidence to construct their studies, this paper proposes the introduction of a methodological innovation in semi-structured interview design: the use of decision-making maps that help both the researcher and the informant negotiate meaning, define data and advance interpretations in a collaborative fashion. This is particularly important in the context of qualitative research that is aligned with a constructivist conceptualisation of knowledge – one that asserts that researchers must rely upon the "participants' views of the situation study” (Creswell 2003, p.8). The difficulty in constructivist research is exactly demonstrating that the participants’ view of the situation as reported in research findings is not simply the result of researchers’ interpretive whim, and that negotiation of meaning has in fact
  • 13. occurred. This is intimately related to what Denzin and Lincoln (2005) describe as constructivism’s subjectivist epistemology, in the sense that knower and respondent are co-creators of understandings (Denzin and Lincoln 2005). The use of decision-making maps in conjunction with semi-structured interviews, as advocated in this paper, enhances and materialises the opportunities for co-creation of understandings between researcher and participant, at the moment of data collection. Further explanation of this process and its main stages are advanced in the following sections. A detailed description of the rationale and process of decision-making maps is provided in Section 2. Section 3 discusses the stages, difficulties (and how they were overcome) and implications of applying the instrument to an empirical context: a single case study of a UK local city council’s decision-making process concerning new IS projects. The paper closes with a recommendation to use decision-making maps in the process of semi- structured interviews to promote interaction with informants, to foster goal-oriented thinking, and to operationalise social negotiation and co-production of knowledge. 2. A description of the development of decision-making map “Researching a problem is a matter of using the skills and techniques appropriate to do the job required within practical limits: a matter of finely judging the ability of a particular research tool to provide the data required and itself a skill” (Hughes & Sharrock 2006, p.12). This section describes the process of developing decision- making maps as a data collection instrument to be
• 14. used in conjunction with semi-structured interviews. A review of perceptual mapping and its uses in Marketing research is provided, since the idea to design decision-making maps for interpretive, interview-based research stemmed from this field. This is followed by a detailed explanation of the structure and process of the proposed data collection instrument.
2.1 What is the perceptual map?
Within Marketing research perceptual maps have been known as a powerful technique which is used for designing new products, advertising, determining good retail locations and developing several other marketing solutions and applications (Hauser & Koppelman 1979). Examples of the use of perceptual mapping in Marketing research include Kim's (1996) perceptual mapping of hotel food and beverages' attributes and preferences, Wen and Yeh's (2010) investigation of customers' perceptions and expectations concerning international air passenger carriers, or Maltz et al.'s (2011) investigation of sourcing managers' perceptions of low cost regions. In general terms, perceptual mapping techniques help organisations understand how their products are perceived by consumers in relation to the different products in the marketplace. Perceptual mapping techniques aim at producing a diagrammatic representation of that perceptual space occupied by organisations (Kholi and Leuthesser, 1993). A typical perceptual map will feature the following characteristics: pair-wise distances between product alternatives that indicate how closely related products are according to customers' understanding; a vector on the map that "geometrically denote[s] attributes of the perceptual map"; and axes that suggest "the underlying dimensions that best characterise how customers differentiate between alternatives" (Lilien and Rangaswamy, 2004:119). Perceptual maps' dominant approach to collect and analyse data on consumers' perceptions of products is objectivist, developing in most cases via attribute based methods (factor analysis) or similarity based methods (multi-dimensional scaling).
2.2 A qualitative design of decision-making map
A central claim advanced in this paper is that some features of perceptual mapping techniques can be developed in qualitative research designs, in conjunction with semi-structured interviews, provided that the researchers take into account the particularities of "verbatim conversation" that occurs between interviewee and interviewer as the way to find answers for questions such as "how" and "why" (McNeill & Chapman 2005, p.22). Unlike the perceptual mapping techniques used in Marketing research, the priority is not extracting meaning from numerical approaches and statistical analysis of the social facts, or applying multidimensional scaling and factor analysis to construct a perceptual map. Furthermore, the perceptual map as used in Marketing research is the outcome of a technique consisting of detailed procedures, whereas the qualitative decision-making
  • 16. map proposed in this paper consists of an instrument designed to collect information during the semi- structured interview. The perceptual map as used in Marketing research includes three characteristics. Firstly, it has pair-wise distances between product alternatives that specify how close or far the products are in the mind of customers. Secondly, it has a vector on the map that indicates magnitude and direction in the geographical space of map. Finally, it displays axes of the map that show the underlying dimensions that characterise how alternatives are differentiated by customers. Based on these fundamental characteristics, we developed the decision-making map as a data collection instrument. The details pertaining to the operationalization of the proposed instrument are advanced in the following sections. An empirical application of the instrument is discussed in section 3. 2.2.1 The processes of decision-making map Departing from the core principles of perceptual mapping, we have designed a new instrument, which should be used during the semi-structured interview process. In practical terms, it requires both interviewees’ and interviewer’s engagement in producing a diagram on a sheet of paper (e.g. A3 size) provided by the researcher. Although consisting mostly of blank space where the informant is expected to jot down concepts and ideas, the diagram provides quadrants organised according to dominant research perspectives that will be used as the bases for discussion and conversation between informant and researcher. The selection of dominant
• 17. perspectives is informed by the review of literature in the substantive area of research study and by earlier observations of the phenomenon under study. The literature review helps the researcher identify sensitising concepts and points of departure – not strict perspectives that could detract the researcher's attention from emergent data. Accordingly, and in a similar perspective to that advocated by Charmaz (2006), the literature review helps to demonstrate grasp of relevant concepts. Furthermore, should research participants want to make contributions beyond the conceptual terms suggested by the diagram, an additional and entirely blank map was to be made available for use during the interviews.
Table 1: Two principles of the decision-making map
1. Identifying related perspectives of the research topic under investigation
2. Filling the spaces in the decision-making map
Also drawing from the original use of perceptual mapping in Marketing research, the proposed decision-making map instrument includes different axes that operate as borders to the dominant research perspectives extracted from the literature and earlier observations of the phenomenon. However, the position in the
  • 18. diagram where the informant decides to jot down ideas and concepts (i.e. under which quadrant or section, and how distant/close to the different axis) is not subject to quantification or measurement. For instance, if after conducting the literature review we are able to identify three dominant perspectives related to the phenomenon under investigation, the diagram to be completed by the informant during the interview process would resemble what is depicted in Figure 1. This proposed instrument involves the informant in the process of writing key terms which are related to the phenomenon of research investigation based on the informant’s perceptions in terms of where those key terms should be placed. It is expected that the process of filling the decision-making map is completed through a series of questions being asked by researcher that set the conversation in motion, stimulating the negotiation of key terms that are advanced by the informant and recorded in the decision- making map. This practice can lead to a deeper understanding of how different dimensions affect the unfolding understanding of the substantive research problem, facilitating the identification of key themes and the process of theory-building. Furthermore, the process empowers the interviewer to ask more precise questions concerning the series of elements identified and written-down by the informant. There are also increased opportunities for comparison across elements and dimensions identified in each diagram.
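The authors do not prescribe any particular way of transcribing a completed map, but if the recorded terms are to be carried into later coding, one simple (hypothetical, not part of the paper's method) structure is to keep, for each entry, the quadrant chosen, the term written down, and any notes from the negotiation that followed; the Python sketch below assumes exactly that and nothing more.

    from dataclasses import dataclass, field

    @dataclass
    class MapEntry:
        perspective: str      # quadrant the informant chose, e.g. "Organisational"
        term: str             # key term written on the map
        notes: str = ""       # points raised while negotiating its meaning or placement

    @dataclass
    class CompletedMap:
        informant_id: str     # hypothetical participant code
        entries: list[MapEntry] = field(default_factory=list)

    example = CompletedMap("P07", [
        MapEntry("Organisational", "corporate strategy", "linked by the informant to financial factors"),
        MapEntry("Technological", "legacy systems"),
    ])
    print(len(example.entries), "terms recorded for informant", example.informant_id)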
• 19. Figure 1: A sample of a decision-making map
More significantly, this practice provides an opportunity for informant and interviewer to establish discussion and social negotiations of meaning over the subjects under investigation. We believe that this approach enables the visualisation of facts that emerge as influential themes according to the informants' perceptions. Finally, this approach can enhance the process of data analysis and theory building, by enabling the production and mapping of informant-led theoretical abstractions in the form of themes and keywords that are positioned in the diagram.
3. A use of decision-making map in information system research
To illustrate the use of the decision-making map, this section describes the use of this instrument in the context of an Information Systems (IS) research project. It includes explanations about how the data collection process developed, what kinds of challenges during the application of the instrument were experienced, and what kinds of techniques have been applied to mitigate the difficulties encountered. The presentation of this case includes an overview of the project's objectives, use of the decision-making map and challenges of applying the tool. The project sought to identify which elements influence public sector (UK local councils) administrators' decision-making when executing e-Government projects.
  • 20. 3.1 Objectives of research project The development and implementation of electronic government (e-Government) have been studied since the early introduction of the concept of e-Government to public sector organisations. Various models and approaches have been suggested to follow by public sector administrators and researchers to investigate and monitor the trends of providing new e-Government services to the public. Different numbers of advantages and complexities have been listed during practitioners’ engagement with the process of e-Government development and implementation. Nonetheless, after more than a decade of e-Government development and advancement, recent e-Government studies indicate that the public sector organisations in both developed and developing countries have mostly achieved the preliminary stages of the models (e.g. United Nations 2012). This finding leads us to question what issues or elements have hindered the process of developing and implementing e-Government services. To better understand this issue from an IS perspective, we recognised that the process of IS investment decision-making by public sector administrators to provide new e- Government services had to be studied. The existing literature displays very general knowledge about this area of study, since most of the currently available explanations on IS project pertain mainly to private sector organisations (Gauld 2007). Due to the exploratory nature of this research, the selection of an interpretative case study was deemed appropriate (Walsham 1995). In addition, based on Yin’s guidelines, if the research questions are categorised into “how”, “what” and “why” questions, the focus of research
  • 21. is on contemporary event and the researcher has less control over events, the use of case study is the best approach (Yin 2009). Furthermore, the use of case study helps to obtain a holistic and in-depth knowledge in regard to the phenomenon under investigation (Pickard 2007, p.86). In order to operationalise the objectives of this study, a local city council in South Yorkshire - Sheffield local council - agreed to participate in this research project as a case study. In total 17 interviews (corresponding to 1040 minutes) took place with key stakeholders (i.e. senior, middle and front line managers) in the process of e-Government development and implementation decision- making. The interview guide was designed to ask interviewees to reflect on which actions they considered as necessary and critical when deciding to provide a new e-Government service. Furthermore, the informants were asked to highlight elements that have impacted on their decision making or the decision-making of their colleagues when new services have been developed. The semi-structured interview guides have been used in conjunction with the decision making map as data collection instruments. After completing the data collection process, thematic analysis was used to code and interpret data. 3.2 Application of decision-making map Following the principles described in section 2, we have first initiated the process of preparing for conducting interviews and designing decision-making maps with a sensitising review of the literature. The process of reviewing the literature in the substantive area of e-Government development and implementation led us into the identification of 13 models of e-Government development and implementation. Interestingly, among the
• 22. identified e-Government development models, 8 out of 13 models have been developed between 2000 and 2002. Some discussion of e-Government development models and structures is inevitable, but nevertheless useful to adequately deliver the objectives of this paper. Since the phenomenon of e-Government development and implementation was the subject of interest, the most prominent models of e-Government development and implementation were carefully studied and the way e-Government can transform public sector organisations was highlighted. As the result of reviewing the e-Government literature, four categories of changes in public sector organisations were identified in relation to e-Government development and implementation. We named those categories of changes organisational, operational, technological and socio-environmental changes. This means that we identified four perspectives that could be migrated into the decision-making map to facilitate data collection during interviews. The interview guide was organised into three sections. The first section included questions that asked informants to describe the past and current e-Government development and implementation in the local city council. The second section was centred around the decision-making map, and included questions that broadly
• 23. covered the four perspectives identified for the elaboration of the decision-making map. Finally, the last series of questions focused on determining how appropriate the perspectives were to the specific context. Therefore, the decision-making map was prepared based on four perspectives, giving place to four quadrants (each allocated to one perspective). In order to complete the decision-making map, informants were prompted to reflect on the range of organisational, operational, socio-environmental and technological perspectives that affect the local city council's decision-making to provide new e-Government services. In case participants felt that they wanted to contribute key terms that could not be allocated into any of the proposed quadrants, an additional and entirely blank map was also available for use during the interviews. After informants jotted down which elements they perceived to be influential in the process of decision-making, the researcher prompted discussion and further elaboration on each of the concepts recorded by informants on the diagram, using interrogations such as: "could you explain the identified terms and elements?"; "Why do you think these elements are important?"; "Why did you put this element under this perspective?"; "Does this element impact on or apply to other perspectives?". Discussion, negotiation and co-construction of meaning developed until both the researcher and the informant felt that there were no further concepts or ideas to discuss. Figure 2 illustrates one of the resulting decision-
• 24. making maps. As can be observed in the figure, the participant identified a range of factors (e.g. financial factors, business requirements, corporate strategy, customer experience), inter-relations between them (represented by arrows), and a set of stakeholders – written down in green – that were perceived to be engaged in the decision-making process (e.g. councillors, central government, senior management).
Figure 2: Example of a decision-making map where one participant recorded perceived factors of e-Government service development
Following a process of inductive thematic data analysis, a set of themes has been identified, representing participants' views on the range of factors that impact the process of e-Government development and implementation decision-making in Sheffield City Council. In broad terms, the emergent factors can be
• 25. grouped in four major themes – organisational management factors, government policy factors, financial factors, and technological factors – with underlying sub-themes displaying a multi-layered configuration, as illustrated by the table presented in Appendix 1.
[Figure 2 labels: Factors recorded by interviewee; Perspective: Operational (OPR), Organisational (ORG), Social & Environmental (S&E), Technological (TEC); Axis]
  • 26. conversation than to record their ideas in more abstract terms with the help of the diagram. In these cases the researchers were aware of the need to respect informants’ preferred way of expressing their thoughts. The recommended course of action for these cases would be for the researcher to start writing notes about key terms mentioned by the interviewees and later on discuss the highlighted materials, and inquiry about their location on the map. It is important to note that this implied making sure that the recording of terms and concepts and their interpretation had been a true reflection of informants’ discourse. The positioning of elements and factors on the map may also distract the informants’ cognitive process because they may be excessively concerned about where the elements need to be assigned. However, this issue can be easily avoided by asking participants to highlight the aspects or factors that they perceive as relevant and then begin the process of negotiation to allocate them into one of the quadrants. Another issue may be the occurrence of informants who are so deeply immersed in the process of identifying factors that they forget to determine their location in the quadrants. If the researchers interrupt them at that moment, they may feel intimidated and this may interfere with the thought flow process. In these circumstances the advice would be to first let the informant complete the identification of terms and elements for all perspectives contained in the map, and subsequently prompt discussion about their location and internal relationships within and across the quadrants. However, since the location of terms on the diagram is important to the interpretation of data, the researchers
  • 27. must avoid providing subjective suggestions to the informant. This can be achieved with the use of laddering interview techniques (Reynolds & Gutman 2001), more specifically the use of ‘why’ questions that address the reasons for their term choice and location preferences. A possible limitation associated with the use of decision- making maps may be the researchers feeling that it is at times difficult to establish positive rapport with informants when they are being asked to complete a task. However it can be counter-argued that the instrumental potentialities of the decision-making map are empowering of interviewees’ ability to uncover root concepts, and that empathy may be generated in the process of negotiation to allocate terms into quadrants. Another drawback is the potential difficulty in managing the “essential tension in interviews” (Rapley, 2001), that of balancing the need to collect data with a genuine commitment to interactional involvement. This can be minimised through using the map as an opportunity to engage in the collaborative construction of a deep, textured picture. The map is not a deterministic tool, but it can certainly operate as a topic initiator and/ or as an effective producer of follow-up questions. Finally, informants’ disabilities may impede the process of completing the diagram. Should this occur, the researchers can assist the informant in the process of completing the diagram through creating opportunities to maximise discussion around key terms and their location in the diagram. 4. Conclusion In this paper, the need for methodological innovation along the lines of the constructivist research paradigm is
• 28. emphasised. The novel methodological instrument outlined is a decision-making map, in which a semi-structured perceptual map – organised around literature review-informed axes and quadrants – is used by the investigators to promote interaction with informants in an interview situation, to foster goal-oriented thinking, and to operationalise social negotiation and co-production of knowledge. By engaging the informant in the identification of major perceptual themes, this process gives the informant the steer, which can prove extremely helpful in improving the validity of qualitative research that claims affiliation with constructivist ontology. In practical terms, the decision-making map operationalises an important tenet of constructivist research – that of social negotiation of meaning – by operating as an instrument for co-creation of understandings between researcher and participant at the moment of data collection. It creates moments for discussion and it allows the recording of the concepts that participants chose as the best descriptors for their perceptions. In so doing, it reverses the typical accountability relations that develop in an interview encounter and it increases the plausibility of analytical theorisations that are not a monopoly of the researcher's interpretive capabilities.
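As an editorial illustration only (the authors prescribe no software, and every name below is hypothetical), the following short Python sketch shows one way the terms recorded on such a decision-making map, together with the quadrant negotiated for each and the rationale elicited through laddering 'why' questions, could be captured and then grouped by perspective for later thematic analysis. The quadrant labels follow the four perspectives referred to in the study: Operational (OPR), Organisational (ORG), Social & Environmental (S&E), and Technological (TEC).

    from collections import defaultdict
    from dataclasses import dataclass

    # The four perspectives (quadrants) of the decision-making map.
    QUADRANTS = {"OPR": "Operational", "ORG": "Organisational",
                 "S&E": "Social & Environmental", "TEC": "Technological"}

    @dataclass
    class MapEntry:
        term: str         # factor the informant highlighted on the map
        quadrant: str     # quadrant agreed during negotiation (a key of QUADRANTS)
        rationale: str    # answer to the laddering "why" question
        interviewee: str  # anonymised informant identifier

    def group_by_quadrant(entries):
        """Group recorded terms by perspective so sub-themes can be compared across informants."""
        grouped = defaultdict(list)
        for entry in entries:
            grouped[QUADRANTS[entry.quadrant]].append((entry.term, entry.rationale))
        return dict(grouped)

    # Invented example entries from two interviews.
    entries = [
        MapEntry("legacy system integration", "TEC", "existing systems constrain new services", "P01"),
        MapEntry("budget cycle pressure", "ORG", "funding decisions drive project timing", "P02"),
    ]
    print(group_by_quadrant(entries))

Grouping in this way simply mirrors, in data form, the multi-layered thematic configuration reported in Appendix 1; it is a convenience for analysis, not part of the instrument itself.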
• 29. Acknowledgement
The authors would like to thank the Sheffield City Council for their participation in the study reported in this paper.
References
Bryman, A., 2012. Social research methods, 4th ed., New York: Oxford University Press.
Charmaz, K., 2006. Constructing grounded theory, London: Sage.
Cohen, L. & Manion, L., 1994. Research methods in education, 4th ed., London: Routledge.
Collins, P., 1998. Negotiating Selves: Reflections on "Unstructured" Interviewing. Sociological Research Online, 3 (3). Available at: http://www.socresonline.org.uk/3/3/2.html.
Creswell, J.W., 2003. Research design: qualitative, quantitative, and mixed methods approaches, 2nd ed., Thousand Oaks, CA: Sage.
Denzin, N.K. & Lincoln, Y.S., 2005. The Sage handbook of qualitative research, London: Sage.
Gauld, R., 2007. Public sector information system project failures: Lessons from a New Zealand hospital organisation. Government Information Quarterly, 24 (1), pp.102-114.
Hauser, J.R. & Koppelman, F.S., 1979. Alternative perceptual mapping techniques: relative accuracy and usefulness. Journal of Marketing Research, 16 (4), pp.495-506.
Kim, H., 1996. Perceptual mapping of attributes and preferences: an empirical examination of hotel F&B products in Korea. International Journal of Hospitality Management, 15 (4), pp.373-391.
Kohli, C.S. & Leuthesser, L., 1993. Product positioning: a comparison of perceptual mapping techniques. Journal of Product & Brand Management, 2 (4), pp.10-19.
Kvale, S. & Brinkmann, S., 2009. InterViews: Learning the Craft of Qualitative Research Interviewing, 2nd ed., Thousand Oaks, CA: Sage.
Lilien, G.L. & Rangaswamy, A., 2004. Marketing engineering: computer-assisted marketing analysis and planning, Victoria: Trafford Publishing.
Mackenzie, N. & Knipe, S., 2006. Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16 (2), pp.193-205. Available at: http://www.iier.org.au/iier16/mackenzie.html
Maltz, A., Carter, J.R. & Maltz, E., 2011. How managers make sourcing decisions about low cost regions: Insights from perceptual mapping. Industrial Marketing Management, 40 (5), pp.796-804.
May, T., 2003. Social Research: Issues, Methods and Process, 3rd ed., Buckingham: Open University Press.
McNeill, P. & Chapman, S., 2005. Research methods, 3rd ed., New York: Routledge.
Mills, J., Bonner, A. & Francis, K., 2006. The development of constructivist grounded theory. International Journal of Qualitative Methods, 5 (1), pp.25-35.
Pickard, A.J., 2007. Research methods in information, London: Facet Publishing.
Rapley, T.J., 2001. The art(fulness) of open-ended interviewing: some considerations on analysing interviews. Qualitative Research, 1 (3), pp.303-323.
Reynolds, T.J. & Gutman, J., 2001. Laddering theory, method, analysis, and interpretation. In T.J. Reynolds & J.C. Olson (eds), Understanding Consumer Decision Making: The Means-End Approach to Marketing and Advertising Strategy. Hove, UK: Psychology Press, pp.25-62.
United Nations, 2012. United Nations E-Government Survey 2012: E-Government for the People, New York. Available at: http://unpan1.un.org/intradoc/groups/public/documents/un/unpan048065.pdf.
Walsham, G., 1995. The Emergence of Interpretivism in IS Research. Information Systems Research, 6 (4), pp.376-394.
Wen, C. & Yeh, W., 2010. Positioning of international air passenger carriers using multidimensional scaling and correspondence analysis. Transportation Journal, 49 (1), pp.7-23.
Yin, R.K., 2009. Case study research: Design and methods, 4th ed., Thousand Oaks, CA: Sage.
Appendix 1 –
• 32. Emergent themes representing participants' views on the range of factors that impact the process of e-Government development and implementation decision-making in Sheffield City Council
The Qualitative Report 2016 Volume 21, Number 1, How To Article 2, 44-58
Translational Research Design: Collaborating with Stakeholders for Program Evaluation
Kari Morris Carr
• 33. Indiana University, Bloomington, Indiana, USA
Jill S. Bradley-Levine
Ball State University, Muncie, Indiana, USA
In this article, the authors examine researcher collaboration with stakeholders in the context of a translational research approach used to evaluate an elementary school program. The authors share their experiences as evaluators of this particular program to demonstrate how collaboration with stakeholders evolved when a translational research approach was applied to program evaluation. Beginning with a review of literature regarding stakeholder participation in evaluation and other qualitative research, the article reflects on a method for conceptualizing participant involvement and collaboration within the translational framework. The relationship between researchers and stakeholders is articulated according to this method. We
  • 34. interpose these descriptions with their alignment to Petronio’s (2002, 2007) five types of practical validity for translational research. The paper ends with a consideration of what was learned throughout the evaluation process, including both successes and challenges, by means of the translational model. Keywords: Translational Research, Translational Validity, Participation in Program Evaluation, Collaborative Research The translational research design represents a researcher’s commitment to collaboration with participants, and addresses issues of ethics and advocacy that have been recognized in established descriptions of qualitative research (Creswell, 2007; Denzin & Lincoln, 2005; Fine, Weis, Weseen, & Wong, 2000; Garner, Raschka, & Sercombe, 2006; Lincoln, Lynham, & Guba, 2011; Korth, 2002; Smith & Helfenbein, 2009). Specifically, translational research represents an effort to translate findings into functional
  • 35. solutions for research partners and community members (Petronio, 2002). Yet the literature finds that translational efforts are neither easy nor occurring with great frequency (Maienschein, Sunderland, Ankeny, & Robert, 2008; Petronio, 1999). In recent accounts, scholars have located translational research within the fields of communications and medicine in which discoveries are driven (translated) toward practical applications (Hamos, 2006; Petronio, 2007). In our use of the term, both the process (method) and product (outcome) characterize important aspects of translational research, particularly among the individuals with whom the researchers collaborate: the local partners or stakeholders. The evaluation project described in this article is used to demonstrate how translational research and collaboration with stakeholders developed in the context of the evaluation of an educational program. It is our goal to represent the translational research processes by sharing actual experiences in collaborating with a specific evaluation partner. However, we do not present results from actual data concerning
• 36. this evaluation. This article recounts the relationship we developed while working at a university-based education research center with the Catholic diocese of a large Midwestern city. The project involved the evaluation of an after-school program established to meet the educational needs of children attending low-performing and high-poverty Catholic schools. Though the initial partnership developed out of the diocese's need for program evaluation, we identified this need
  • 37. public. While traditional notions of research often focus on a linear process in which faculty researchers generate questions, conduct a study, and publish results, the translational process begins and ends with researcher and partner together at the table co-leading the inquiry process (Ortloff & Bradley-Levine, 2008; Petronio, 2002; Smith & Helfenbein, 2009). In the current case, the demand for university level research intersected with a community partner’s need for accountability and translated to products beneficial for the partner, its program, participants, the university, and academic community in general. The translational methods described here are much like a moving target. Indeed, forming a true partnership is not considered an end in itself, but rather an ongoing practice. Partners aimed to learn from the other throughout the research process and to better meet the needs of the community as a result. Our case is no exception. As such, we find it necessary to describe some history of the field of translational research. Next, we identify common understandings of stakeholder involvement within evaluation
  • 38. and qualitative research literature, but note that we prefer the term “partner” to “stakeholder” in order to draw attention to the intended horizontal relationship we are cultivating with the community. However, we will use the terms “partner” and “stakeholder” interchangeably given that the latter is more commonly used in the selected literature. Lastly, we outline the specific methods we utilized in the translational research process, drawing on research methodology across disciplines. These methods are by no means a “how to” list for translational research among community partners, but rather describe what evolved “at the table” when we came together with our research partner. Finally, while it is important to note that program evaluation is a large piece in the relationship between the research center we represented and the diocese, it is just one part of the translational relationship, and the emphasis of this article. The goal of forging opportunities for translational research is, indeed, to improve practice for community partners—through the
  • 39. work they need, but also through university research made public—and to overtly engage local stakeholders who are experts of their contexts in order to make university resources relevant and applicable to real community needs (Smith & Helfenbein, 2009). Our case is but one example, and in writing this article, the reflection process prompted us to further define what we mean by “translation.” Thus, the methods in translation described here served a dual goal: to aid community partners in meeting their need for evaluation/research, and to extend current notions of qualitative research for the purpose of bringing the needs of the community to the fore of scholarship (Petronio, 2002). Literature: Approaches to Translation Translational Research in Communications and Medicine Both communications and medical research scholars have a recent record of using translational research in their respective fields. Petronio (2007)
• 40. and Maienschein et al. (2008) acknowledge the more recent and popular focus bestowed upon translational work through the National Institutes of Health (NIH) and their "Roadmap for Medical Research" issued in 2003, in which the U.S. federal government called for scientists to intensify their efforts to apply medical results more rapidly than in the past. However, as early as the mid-1990s, Petronio (2007) described a commitment to "translating research into practice" (p. 215). In other words, she advocated a way for communications scholars to establish methods of implementation that would be "geared toward developing functional practices to improve lives" (p. 215). There is a subtle difference between the two fields' treatment of the word translation, though both involve the increase of efforts toward bringing scholarship and research to the clinical or community places where the application of new knowledge is most pressing.
  • 41. Woolf (2008) refers to these two types of translational work in the medical field as T1 and T2. T1 is identified as the “new methods for diagnosis, therapy, and prevention and their first testing in humans” as have been acquired from recent laboratory work (p. 211). T2, on the other hand, focuses on the intersection of the results from T1 with the “community and ambulatory care settings” which involves a whole host of unpredictable variables and disciplines that characterize work with “human behavior and organizational inertia” (pp. 211- 212). Simply put, T1 appears to be the actual drugs and treatments that emerge from the lab, while T2 refers to the ways in which the drugs and treatments are accessed by the patients and communities who need them. From a research perspective, T1 requires more quantitative approaches such as experimental design whereas T2 benefits from qualitative approaches because the goal of T2 is to answer questions of why and how communities and individuals use the innovations developed through T1 research. Moreover, what Petronio and
  • 42. communication scholars have been calling “translating scholarship/research into practice” for over a decade closely resembles Woolf’s T2. Petronio (2007) identified several types of translational validity which address the uncertainty of applying findings to practice and help further define their contribution to the field. These are “experience,” “responsive,” “relevance,” “cultural,” and “tolerance” validities (Petronio, 2007, p. 216). Each describes aspects and enactments of communication to which translational scholars must be attentive in achieving the goals of translation. More specifically, they explain the precise means for the researcher and the stakeholder’s partnership in the inquiry, and how these should proceed. The five types of validity not only offer “criteria for the admissibility of evidence” and ways to “align scholarship to the translational process” (Petronio, 2002, p. 511), but in our understanding they propose how stakeholders and researchers collaborate in research. Experience validity recognizes the lived experience of the research partners and
  • 43. subjects. Responsive validity obliges researchers to remain attentive to society’s changing needs. Relevance validity ensures that value is placed “on the issues important to target populations,” making certain that community needs come first when researchers are deciding which questions to explore in their work (Petronio, 2002, p. 510). Cultural validity respects both the ethnicities and customs of various cultural groups and ensures that these serve as a context for research translation. Lastly, tolerance validity upholds the iterative research process by recognizing “taken-for-granted phenomena that occur in everyday life and passing that understanding on to others” (p. 511). In essence, we observe a strong correlation between translational validity and qualitative research (Petronio, 2002). The five types of validity offer a way for qualitative researchers to define their ontological and epistemological views by means of the translational approach. Many qualitative approaches acknowledge the social negotiation of both the
• 44. researcher's and participants' views of reality (Creswell, 2007). In this view, there is not one reality, but a mutual perspective in which researcher and participant (among others) collaborate to build and share their respective understandings of their lived experiences. Knowledge is likewise generated through iterative and negotiated processes within the shared research. Petronio's five types of validity assist the researcher in calling attention to the many contexts and reasons for keeping collaboration and negotiation at the forefront of the research process. Within Petronio's five types of validity, researchers selecting qualitative approaches can recognize ways to describe, evaluate, and substantiate their collaboration with stakeholders and the community. They also aid the researcher in being attentive to ways in which collaboration ought to take place. Likewise, the five types of validity (in varying ways) highlight what we, through our
  • 45. partnership with the diocese, have sought out in meeting their needs based on their particular circumstances, practices, cultures, and overall lives that existed prior to our involvement, and persisted after we left the field. Experience, cultural, and tolerance validities are the most applicable to our case of program evaluation. Each represents the ways in which we continually negotiated the terrain of translational work in the evaluation of the after-school program through a deep contextual understanding of our partner’s lived experience and culture. Because the relationship with community members is so integral to translational work, we now turn to the literature’s treatment of stakeholder participation in evaluation and research to help address the issue of researcher and community relationships. Stakeholder Participation and Communication More common notions of partner involvement in the literature refer to degrees of stakeholder participation within evaluation and academic research. Taut (2008) reviewed
  • 46. several researchers’ conceptions of stakeholder involvement within evaluation research, in particular, and found that there was no conclusion regarding how many and to what degree stakeholders should be involved in research. Nonetheless she noted that all researchers believe they should be engaged to some extent. In a widely-cited article concerning types of collaborative evaluation, Cousins and Whitmore (1998) distinguished between two types of participatory research, which they term “Practical-Participatory Evaluation” (P-PE) and “Transformative-Participatory Evaluation” (T-PE). In P-PE, the evaluator leads most aspects of the evaluation along with the participants, while T-PE characterizes full stakeholder involvement (Cousins & Whitmore, 1998; Fitzpatrick, Sanders, & Worthen, 2010). O’Sullivan and D’Agostino (2002) applied Cousins and Whitmore’s framework and further explained that utilization of findings is an important consideration when debating the role of participants in evaluation. They find that although some participants believe that the
  • 47. evaluator should be the one who moves forward with the findings, most believe it is the involvement of stakeholders that will increase utilization of an evaluation (O’Sullivan & D’Agostino, 2002). They also found that participation can be loosely defined and must be treated with caution. Simply providing program data can be termed “participation,” but true collaboration moves beyond data provision to imply the “desired level of involvement” (Fitzpatrick, Sanders, & Worthen, 2010; O’Sullivan & D’Agostino, 2002, p. 373). Similarly, stakeholder involvement is often dependent on the desired outcomes of the study (Taut, 2008). If there is a social justice goal regarding the empowerment of participants, then it is often the case that every stakeholder is involved and the use of an evaluation’s results becomes diminished. However, if the utilization of findings is most pressing, the involvement of fewer participants is often perceived as more beneficial to the evaluation process (Taut, 2008). In either case, a belief in stakeholder contributions places varying conceptions of
• 48. participation and the use of research outcomes at the center of defining what collaboration in evaluation means. We recognize the contribution of translational research for its consideration of participant/stakeholder contexts and study outcomes (Smith & Helfenbein, 2009). Some literature considers the many ways in which participants ought to be involved in research, both practically and ethically. These include roles in participatory types of inquiry, in challenging notions of hierarchy and power, and for the contributions they make to the research process (Fine, Weis, Weseen, & Wong, 2000; Garner, Raschka, & Sercombe, 2006). What translational research brings to bear on these levels of understanding for participant involvement is the idea of challenging current university practice (Smith & Helfenbein, 2009).
  • 49. research, and emphasize the practice of collaboration and reciprocity to simultaneously meet the immediate needs of the community and university (Petronio, 2002). Taken together, the literature summarizes varying conceptions but lacks in making actual methods of stakeholder collaboration explicit (O’Sullivan & D’Agostino, 2002; Taut, 2008). The translational partnership described below sheds light on ways stakeholders and evaluators can work together in one type of qualitative research, both to increase participation on all sides and to illuminate a new method for carrying out university research and evaluation. Cunningham (2008) asserts that collaboration must foster participation in ways that “remove barriers between those who produce knowledge (researchers) and those who use it (practitioners)” (p. 375). Thus, we articulate understandings of participatory research and evaluation in the following table. Table 1. Summary of Collaborative Research/Evaluation Strategies and Elements of Inquiry
Practical Participatory Evaluation – Principal Investigator (PI)/Evaluator Role: Balanced leadership of inquiry with stakeholders, but ultimate decision-making with PI. Stakeholder Involvement: Balanced involvement in the inquiry process, but ultimate decision-making with PI. Goal of Inquiry: PI and stakeholders together determine utilization of findings locally.
Empowerment Evaluation – PI/Evaluator Role: PI is facilitator of the inquiry. Stakeholder Involvement: Full involvement in the inquiry and decision-making process. Goal of Inquiry: Stakeholders determine utilization of findings with goal of empowerment.
Translational Research/Evaluation – PI/Evaluator Role: Co-leads inquiry with local stakeholders; brings university resources to inform/support inquiry; expert of the evaluation/research process. Stakeholder Involvement: Co-leads inquiry with PI; expert of the local context. Goal of Inquiry: PI and stakeholders determine utilization, application, and publication of findings; ensures that research outcomes directly improve stakeholders' roles in the community and lives of the target population in addition to contributing to wider body of knowledge.
Adapted in part from Cousins and Whitmore (1998) and Fitzpatrick, Sanders, and Worthen (2011).
Common to all types of research and evaluation are the three elements: principal investigator (PI)/evaluator control, stakeholder involvement, and the goal of the inquiry. Each of the three types of research/evaluation summarized in the table highlights different views of the three elements. The principal investigator/evaluator controls all aspects of research, shares research decisions locally with stakeholders, or is a balance between both. Research involves all stakeholders in all aspects of research (e.g., transformative evaluation), or only a select few
• 54. stakeholders in a small number of research decisions (e.g., some types of participatory evaluation). Lastly, the goal of the inquiry could be to forge a partnership with stakeholders within an organization (e.g., transformative evaluation), or for results to be fed back into the local organization when the research is complete (e.g., participatory evaluation). Most important to our current work, however, are characteristics of the third type: translational research. Translational research maintains many of the aspects of the types above, but also acknowledges that both the evaluator and stakeholder are experts of their own contexts. It works toward bringing together the best of research and practice in order to further the goals of the community within the framework of university research such as in our case.1 In sum, stakeholders and the researcher both participate and contribute to the inquiry, and the results of research are to be applicable to the community organization
  • 55. and published in a manner that makes the findings practical and available to the wider academic and public community. Translational Methods Enacting Translational Research through Partnership The partnership between the research center and the diocese began in the spring of 2007 when the after-school program director approached the director of our center to discuss the diocese’s need for a more meaningful evaluation of their program. The center’s translational research model required that researchers “be invited into a position where [they] are able to describe (or retell) events, as well as the rationale for decisions from the organization’s point of view” (see Smith & Helfenbein, 2009). The diocese’s need and our expertise opened the door for a collaborative partnership. The diocese was then applying for grant renewal to fund their program and sought opportunities for on-going formative feedback that would impact
  • 56. program implementation and quality, and the potential for the program director to contribute to the evaluation design and process. Our first task was to create the evaluation plan for the diocese’s grant narrative. Pivotal to this task was the development of research questions which were crafted from the after-school program’s goals. Secondly, we sought approval to work with human subjects from our university’s institutional review board (IRB), which ensured our research provided the necessary documentation, safeguards, and transparency to assist in ensuring participants’ privacy and protection. Once the diocese reviewed and provided feedback to our evaluation plan and the IRB approved our protocol, the research team began the process of understanding the after-school program and how it fit into the program’s goals and mission (Fitzpatrick, Worthen, & Sanders, 2011), reflective of Petronio’s (2002, 2007) experience validity and cultural validity. As part of this team, the authors explored the diocesan website, reviewed curricular materials from the program and schools, and attended staff trainings as participant
• 57. observers. These activities allowed us to "take into account the lived through experience of those we [were] trying to understand" (Petronio, 2002, p. 509). After the initial work in seeking to better understand the origin and mission of our community partner, the research team, led by one of the authors, entered the field and began in-depth observations of the program's summer camp. During this time, it was essential that team members engaged with the staff to establish a "supportive, non-authoritarian relationship" in order to increase trust and get to know more about the program without being intrusive (Carspecken, 1996, p. 90). To accomplish this, the team often ate lunch with the staff during site visits to the camp, and we also made ourselves visible to the staff each day. This prolonged engagement, represented through the length of time we were in contact with the staff and students, as well as the number of hours we observed the program, served to "heighten the researcher's capacity to assume the insider's perspective" (Carspecken, 1996, p. 141). It also represented validation to the program director that
• 58. we were committed to the project and willing to invest significant amounts of time and energy in order to "build trust, learn the culture, and check for misinformation" (Creswell, 2007, p. 207). (Footnote 1: University-based research may not always be the locus for the primary investigator, but it is noted that this was the original intent when Petronio (1999) wrote of translating "scholarship" into practice. University research is what we mean when we discuss our roles as researchers and evaluators within the university research center.) The trust built during the initial months of the partnership led to what Smith and Helfenbein (2009) refer to as "shared decision-making/generating inquiry questions, which involve[d] a pushback against pure objectivity or self-proclaimed independence" (p. 93). In short, the collaborative process began as a result of early trust building and prolonged engagement, representing aspects of experience
• 59. and cultural validity and the larger frame surrounding the participants' experiences (Carspecken, 1996; Petronio, 2002). Collaborative Evaluation Design Because the research center was hired to evaluate the after-school program, questions regarding what the program wanted to know were decided upon in agreement with the program director and the research lead, a position in which both authors served. This aspect of the translational process most aptly reflects relevance validity as we desired to place value on the program's needs and to use their knowledge and descriptions of the issues that were important to them (Petronio, 2002). The researchers saw the staff and partners located within the schools and the community as the authorities of their environments; as a result, we had the opportunity to collaboratively develop appropriate methods in order to answer the most vital questions driven by program needs. Working in concert, the research lead and the program director
• 60. adopted a modified version of the Extended-Term Mixed-Method Evaluation (ETMM) design (Chatterji, 2005), including the following components: a long-term time-line; an evaluation guided by the program's purposes; a deliberate incorporation of formative, summative, and follow-up data collection and analysis; and rigorous quantitative and qualitative evidence. This method of analysis was preferred by the directors and researchers at our university research center for its deliberately flexible, yet specific, methodology that permitted transformation over time, in response to program changes and growth. The ETMM design also enabled the team to effectively combine formative and summative data points within the appropriate timelines. For example, formative data reporting was more useful to program staff mid-way through the academic year and in our informal monthly meetings, whereas summative information concerning student data (i.e., program attendance and analysis of standardized test scores) was valuable at the year's end for both state and local reporting. The
  • 61. key data points included observations, interviews, focus group discussions, surveys, and student-level data including test scores, grades, and attendance records. Although the research lead usually directed the initial development of protocols and surveys, these instruments were shared at various points of development with the program director, which afforded opportunities for her to include questions she needed or wanted to ask. Additionally, because we could not “presume we [knew] what [was] best for [our community partners] or how to best address their… needs,” program effectiveness and implementation questions changed with each year of the grant, and we met regularly with the program director to ensure that the research and evaluation were meeting the concerns of each grant year (Petronio, 2002, p. 510). The selection of the ETMM design for program evaluation likewise supported this type of flexibility (Chatterji, 2005). Participatory Observations
• 62. Petronio (2002) found that qualitative methods are often more conducive to the aims of the five types of translational validity. The use of qualitative participant observations in our research privileged both the experiences and culture of the participants and the surrounding organizations within the diocese's after-school program. After the summer camp came to an end, researchers made plans to begin evaluating the after-school programs held in seven sites serving over 700 students for the academic year. Because the evaluation of the after-school program was a much larger undertaking than what was offered during the summer, the research team began site visits by watching from a distance, careful to observe each program component, and student and staff interaction in their natural settings. However, after a short time, we returned to the participant observer paradigm in order to help build trust with participants, as well as to yield a participant's perspective of the program (Creswell, 2008;
• 63. Petronio, 2002, 2007). We began offering our assistance to students during the time allocated for homework help, which built rapport with the students while offering an extra set of hands to reduce the staff's workload. Working with the students on homework also gave us opportunities to talk to participants in order to discover important insights regarding their experiences. As participant observers we were able to build credibility with the program staff, who noticed that members of the research team were fellow educators and/or parents. As a result, they welcomed us more readily into their buildings, which helped the research proceed more efficiently. We visited each of the schools where the after-school program took place between four and eight times each semester during each school year. The research team also utilized interviews and focus group discussions, which probed the "layered subjectivity" of participants, allowing them to discover and revise their initial thoughts and emotions through each stage of the research (Carspecken, 1996, p. 75). Our
  • 64. familiarity with the program and the trust we built with participants including staff, students, and parents, during extensive observations permitted them to give, what we believed to be, candid responses to interview and focus group prompts. For example, given the option to turn off the recorder so that a critical remark would be “off the record,” many participants chose to leave the recorder on, showing that they trusted we would not only maintain their confidentiality, but that we understood the context of their comments. We found that staff members were more likely to share complaints with us when they knew that the information would be passed to the program director anonymously. This represents an important ethical consideration central to translational methodology in which we attempted to “place greater value on the issues that [were] important for [the] target population” (Petronio, 2002, p. 510). These honest exchanges enabled the diocese’s program director to offer assistance and problem-solve with the after-school staff throughout the year.
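For evaluators handling similar interview and focus group material, the brief Python sketch below is an editorial illustration only; the sites, names, and themes are invented and this is not part of the authors' procedure. It shows one way coded comments could be stripped of identifying details and tallied by theme and site before being summarized for a program director, preserving the kind of anonymity described above.

    from collections import Counter

    # Hypothetical coded comments: (site, staff_name, theme, excerpt).
    coded_comments = [
        ("Site A", "J. Smith", "scheduling", "Homework time feels too short."),
        ("Site B", "M. Lopez", "scheduling", "Pickup runs past the posted end time."),
        ("Site B", "M. Lopez", "materials", "We need more math manipulatives."),
    ]

    def deidentify(comments):
        """Drop staff names so feedback can be shared with the program director anonymously."""
        return [(site, theme, excerpt) for site, _name, theme, excerpt in comments]

    def theme_counts(comments):
        """Tally how often each theme appears at each site for a formative report."""
        return Counter((site, theme) for site, theme, _excerpt in comments)

    anonymous = deidentify(coded_comments)
    for (site, theme), n in sorted(theme_counts(anonymous).items()):
        print(f"{site}: '{theme}' raised {n} time(s)")

The point of the sketch is simply that de-identification happens before any aggregation or reporting, so that candid feedback can inform problem-solving without exposing individual staff members.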
  • 65. The trust in our research team that program staff developed during the evaluation supported our efforts to conduct balanced focus group discussions with parents as well. Although staff members were responsible for recruiting parents to participate in the discussions and we might have expected that they would invite only those parents who were pleased with the program, we rarely held a discussion with a group of parents who made only positive contributions. Rather, staff wanted to hear the constructive feedback from parents they knew were not perfectly satisfied, and they believed that we would utilize this data to help them improve the program. In addition to the qualitative data collection discussed above, the research team and program director co-designed staff, student, and parent surveys to assure that as many stakeholders as possible were given the opportunity to share their perceptions of the program, highlighting our commitment to the ideal that the research serve a relevant purpose for all populations involved (Petronio, 2002). Surveys were
• 66. administered during the fall and spring of each academic year. Before each administration period, members of the research team and the program director collaborated in a review of the surveys to determine whether revisions to questions needed to be made or new topics of interest should be probed. Program staff usually administered surveys, which were available online and on paper. Parent surveys were also translated into Spanish by a staff member. Ongoing Formative Feedback Because data collection occurred almost continually throughout the length of the multi-year grant period, formative feedback was both expected and needed by the program director and staff. The research team utilized the constant comparative analysis model (Glaser & Strauss, 1967), which allowed us to engage in continual analysis whereby themes emerged,
• 67. developed, and changed. Several months of data collection, usually over a naturally occurring time frame such as a semester or summer vacation, were followed by short but intensive analysis. Emerging themes were reported to the program director and staff via formative feedback reports. These served as member checks because the director and staff were invited, and even expected, to offer their perspectives on the findings. Reports typically went through at least two rounds of revisions as a result of these member checks. The diversity of the research team facilitated the constant comparative analysis process and helped address issues of cultural validity through our appreciation of the local ethnicities, customs, and routines of the after-school program, staff, and students (Petronio, 2002). As mentioned previously, a number of team members were former teachers with experience, and therefore expertise, working with students in the grade levels that the program served. However, the diverse backgrounds of other team members also contributed to the overall team
  • 68. perspective. For example, a social work major was also a graduate of one of the schools within the program; she was able to provide a community perspective to our analysis. Another team member was an international student who offered a more global analytic perspective. Also, because of her outgoing and kind personality she was admired by the children in the program. Other team members included psychology majors, higher education graduate students, and sociology majors. The diversity present in the research team facilitated internal debate and perspective taking that we believe would not have occurred within a homogeneous team, and which facilitated the translational research process from partner development and evaluation design through data collection, analysis, and cultural awareness. From the start of this project, we explicitly strove to keep lines of communication open and transparent. To this end, we made our analysis process as understandable as possible by including the program director in various analysis sessions, which provided another
• 69. opportunity for member checking and for disclosing both our own and our partners' biases and values (Petronio, 2002). This sharing allowed us to be clear about the ways the evaluation unfolded and to make the research process accessible to members of the after-school program staff. However, this open communication was complicated at times. For example, at various points during our partnership we were asked to share confidential information such as identifying a staff member who we observed doing something that the program director found unproductive. At these moments, we had to find ways to balance our commitment to preserve confidentiality with the program director's need for impartial information. But it was at these instances of tension that we believe the trust we had built through our partnership allowed us to engage in conversations where we shared, and learned from, our different perspectives. Another form of member checking occurred as a result of our regular communication with staff at each site. Our bi-monthly visits allowed us to serve as a vehicle for facilitating
• 70. interaction among the sites as well as checking our findings. We often shared successes that we observed with sites that were struggling or looking for new ideas, while staff provided us with information about the students, schools, and communities they served. In these ways, our exchange resulted in greater understanding of the context for the research team and increased knowledge sharing (Petronio, 2002) among the sites through our informal reports and continual communication. Learning from Translation Our experience with translational research has positioned us toward demonstrating that "shared ownership of the research process present[ed] conditions for empowerment and create[d] a dynamic exchange of ideas about how to best implement and study an intervention/program" (Smith & Helfenbein, 2009). We say
  • 71. “positioned” because translational research represented an ideal in some respects. Yet it is a type of research within which we find worth and value. Still a moving target, our understanding of translational evaluation and research resonated with Petronio’s (2007) notion of naming this kind of research a “challenge” (p. 216). Her five types of practical validity for translational work provided us with an explicit framework for facilitating stakeholder participation in our research. Because we sought to understand our partner’s lived experience throughout the evaluation process, we achieved some aspects of shared knowledge, and also came up against some difficulties. While in the field as participant observers, for example, we made efforts to build positive relationships with our participants, which helped us transcend certain difficulties. Highlighting Petronio’s (2002, 2007) experience validity, our data collection was fostered within the context of the program’s current practice. And although our proximity to the site staff “as they enacted [their work]” permitted us access to the lived experience of the
  • 72. after-school program, we might have been lacking in other types of Petronio’s translational validity because we did face some challenges in “transform[ing] findings into meaningful outcomes” (p. 216). However, because of our attention to the experience and practice of our partners, we felt that our shared trust facilitated tackling issues that were difficult or uncomfortable for either the program staff or the research team members. An illustration of this challenge is depicted below. At one site, it seemed as though the more research team members shared data with staff members, the more strained our relationship became. The site director and program staff began to view us more as “external evaluators” than as partners and were less likely to respond positively to our presence at their sites. In addition, shortly after our mid-year reports were disseminated, we had a sense that the site director or program staff members were scrambling to show us “what the evaluators want to see” rather than a typical program day. The site director
• 73. and staff were also sometimes concerned because we came on the "wrong day" and were not going to see their program at its "best." To alleviate these tensions, we continually reassured staff that we were seeing many positive things happening at their site. We would often name specific strengths of their program or remind them that during previous visits we had seen many positive elements. When faced with areas in need of improvement, we shared ideas that we had seen implemented at other sites that might help them improve. In addition, we started to ask upon arrival whether there were particular activities that the site director wanted us to see that day. This allowed the site director and staff to show us their best and helped put them at ease concerning whether we would see what they had hoped. For her part, the site director became much more direct about telling us what we missed last week or yesterday, and began to share stories about program elements of which she felt proud. Other site directors also shared their concerns with the program director, who was able to communicate some of these to us on their
• 74. behalf. The nature of our ongoing communication with the program director and site directors gave us many opportunities to directly address the tensions, and work toward finding realistic and empowering solutions as quickly as possible. It also enabled us to become more responsive in the way we communicated with the after-school program staff as a whole "to be receptive to human conditions" and sensitive to the manner in which our communication affected staff behavior (Petronio, 2002, p. 510). The above tensions reflect one challenge in attempting to involve all staff members relative to the utilization of research and evaluation findings. Cousins and Whitmore's (1998)
  • 75. programs were our main contacts for collaboration. Site staff members were involved on a more cursory basis, and usually in response to the program director’s request for assistance in the evaluation. In accord with O’Sullivan and D’Agostino’s (2002) description, site staff members were “participants,” but recall this term is often used loosely. Merely permitting us access to the program at their respective sites, site staff were participating. In seeking to understand why some of our findings were received with tension by site staff, we considered again the five types of translational validity as described by Petronio (2002, 2007). In addition to the need to address the limited participation of site staff, Petronio’s tolerance validity points out our probable deficiency in “honoring existing patterns when [we] bring research into practice” (p. 216). With our main communication residing with the overall program director, our findings were not well received on occasion because they passed through the program director first before proceeding to the site directors. Had we better addressed
  • 76. tolerance validity, we would have been more cautious and cognizant of the intersection between the evaluation results and the sites where the research took place. This junction of communication must be a place where we, as translators of research, position ourselves and the research to be more collaboratively interpreted and presented. In hindsight, we should have offered a work session where site directors and staff were invited to view the research and discuss findings and implications with the research team before creating a collaborative report. Another significant characteristic of the research to which we had been attentive concerned the hierarchical relationships between the program director, site directors, and staff. Though we, as the research team, fit somewhere between the program director and site directors, we constantly found ourselves searching for ways to “work the hyphen” in our researcher-participant relationships (Fine, Weis, Weseen, & Wong, 2000, p. 108). We cast the positivist notion of “objective expert” aside in favor of adopting an approach of solidarity in
  • 77. which we hoped to have “[undergone] an important shift, from that of an outside appraiser to that of a collaborator” (Cunningham, 2008, p. 375). In sum, we hoped to truly collaborate with our partner. Yet, as explored in this article, this is an aspect of our translational process that experienced both success and tension. Our frequent site visits and the participant observation paradigm we followed facilitated our mutual respect in the field. However, because the diocese’s program director led the collaboration efforts with the research team leaders, the researchers’ relationship with site staff appeared unbalanced at times (though most site visits proceeded smoothly). Additionally, both authors are former educators in schools similar to the ones served by the after-school program, and our own backgrounds likely influenced our interactions with the sites and their staff, such as in recommending program changes based on our prior experiences. However, our goal as translators of research into practice compels us to discover more appropriate methods for collaborating with all staff. As we move forth, we must
• 78. echo Petronio's (2002) call for increased communication in order to apply "new ways of conceptualizing a problem [and] make our work more accessible to the people who are not in academia" (p. 511). In this way, we will be able to truly understand the context in which staff members interact not only with our findings, but also with us as partners in the research process. Limitations There were some notable limitations to the translational research approach in our evaluation study. Aside from the challenges noted above in "learning from translation," several limitations existed due to the fact that as researchers for a university center, we had been hired to complete a specific program evaluation for the seven school-based, after-school programs.
  • 79. evaluation requirements. Additionally, some after-school site staff members hesitated to participate in the evaluation beyond the provision of data; most after-school staff members worked other jobs and were paid little (Halpern, 2003) Thus, we understood their trepidation when they declined to invest more time in a collaborative research project beyond their current capacities as after-school staff members. Most of our collaboration took place with the after- school program director who was our point person for the evaluation contract. In retrospect, we would have valued building autonomy and leadership from the ground level up with each after-school site staff member, but this would include altering (somewhat radically) the job descriptions of these individuals. A final limitation concerns our desire to work more intentionally in the results and implementation phase of our research, something which our evaluation proposal did not fully encompass at the academic year’s end. In order to truly work toward the translational research ideal, our results must press toward practicality, functionality,
  • 80. and program quality improvement (Petronio, 2002). This may include redefining some traditional evaluator functions in the future (i.e., extensive data analyses and summative reporting) in favor of participating in collaborative quality improvement teams that work more closely with community partners within formative data collection and application paradigms (M.H. King, personal communication, May 28, 2013). Implications and Conclusion The collaborative research processes that we utilized through the enactment of translational research are relevant and important for all qualitative researchers. In writing this article, we set about demonstrating how collaboration with stakeholders during the research process can contribute to authentically translational outcomes. In our case, the program director, site directors, staff members, students, and parents participated at various levels in the design, data collection, and analysis processes. As a result, we
  • 81. saw findings and recommendations acted upon despite various imperfections in the process. Our close communication with the program director and site directors assisted in ensuring that the context for collaboration and translation was in position. Throughout the data collection, analysis, and reporting procedures, we approximated the true partnership both we and the diocese desired. The second piece of our translational research endeavor consisted of the practical application and dissemination of findings. In addition to informal meetings and formative feedback throughout the academic year, this article itself is another instance of our commitment to advancing research methodology within the wider community. Petronio’s five types of validity address how we consider translational researchers should engage with partners and work to translate findings into practice. They draw attention to the experiences, history, customs, values, and existing patterns of participants within both translational processes and products. Also important was studying the relationships within the
• 82. process of implementing the translational product. How we presented our evaluation report to after-school staff members, for example, was no less important than the evaluation work itself. Care for the people and places with whom we work, and care for those who will use our findings is necessary for translation to occur. Table 1 fails to provide a description of the products of various research models, or to demonstrate whether an outcome or product is important at all. This area requires further research. Translational research highlights the process of the partnership, but also points toward a product and the means for putting that product into practice. The other cells in the table do not make products of the research explicit, and if they do, such as when Taut (2008) described the usefulness of evaluation, the
  • 83. Figure 1 below highlights what we have discovered to be integral components to our translational research work. The first concerns the relocation of university research into community spaces, and the concern for the eventual translation of findings into practical solutions for community partners. The application of findings concerns both the local context and also the larger academic community. The second important feature involves the continuous reflection of translational methods in terms of Petronio’s five types of translational validity. Lastly and perhaps most importantly, is the notion of community partnership, and approaching this partnership in a collaborative manner. Through the ongoing collaborative partnership, the researcher(s) and community members take advantage of each other’s knowledge and resources in the co-construction of research questions and within the research process itself. Figure 1. Features of a Translational Research Model
  • 84. Finally, Petronio’s (2002) discussion of objectivity within translational research illustrates that our work is not value-free; however, we must be willing to examine how our own values and subjectivities overlap with those of our research partners. Here, “if we want to work toward scholarship translation, we have to be clear on the way the values of those being researched and the researcher’s values intersect” (Petronio, p. 511). This moves us beyond just “not interfering” (Petronio, p. 511) with the customs of our stakeholders. In this way, we find translational research challenging at best; yet our struggles do not preclude or outweigh that we also find it to be the most ethical and rewarding manner to approach our work. We are working with relationships that are tenable and evolving, and despite our best efforts to be full collaborators, tensions and imbalances are an inevitable aspect of the process that we must acknowledge and value. Furthermore, what we do have is the understanding that the relationship in which we participate is ongoing, is not an end in itself, and through the trust and
• 85. communication we have built, we have hope that the process will continue into the future for the good of the partnership, the education programs served, and the community.
[Figure 1 (Features of a Translational Research Model) depicts three interlocking components: a collaborative translational methodology, with practical application of findings and university research made public; the five types of translational validity (experience, responsive, relevance, cultural, and tolerance); and an ongoing collaborative community partnership, with co-constructed research questions in which researchers and participants co-lead the research process.]
References
Carspecken, P. F. (1996). Critical ethnography in educational research: A theoretical and practical guide. New York, NY: Routledge.
Chatterji, M. (2005). Evidence on "what works": An argument for extended-term mixed method (ETMM) evaluation designs. Educational Researcher, 34, 14-24.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Program Evaluation, 80, 5-23.
Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five traditions (2nd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W. (2008). Educational research: Planning,
conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson.
Cunningham, W. S. (2008). Voices from the field: Practitioner reactions to collaborative research. Action Research, 6, 373-390.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2005). The SAGE handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Fine, M., Weis, L., Weseen, S., & Wong, L. (2000). Qualitative research, representations, and social responsibilities. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 107-131). Thousand Oaks, CA: Sage.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines. Upper Saddle River, NJ: Pearson Education, Inc.
Garner, M., Raschka, C., & Sercombe, P. (2006). Sociolinguistic minorities, research, and social relationships. Journal of Multilingual and Multicultural Development, 27(1), 61-78.