Pols 601 Final
Jordan Chapman
December 2015
QUESTION #1
Political science is arguably in its infancy, especially when compared to fields such as
physics and mathematics. Eratosthenes accurately estimated the circumference of the earth
based on an experiment he conducted around 240 BC, reaffirming theory that claimed the earth
was, in fact, round. Theory is vital for the advancement of any scientific field. However,
political scientists are often left grasping at straws, unable to find well-established theory in
their sub-field to anchor and guide their research. This essay will begin by exploring challenges
brought on by the youthfulness of the field, notably issues of construct creation,
operationalization, and discerning between systematic and nonsystematic variation in data sets.
Finally, hope will be restored, as a survey of techniques for increasing the leverage of the
theory that is available will be provided.
Before data collection can ever begin, a thorough and clear understanding of the concepts
of interest is necessary. Such understanding is usually derived from theory. Unfortunately, in
young sciences, even constituted definitions of concepts are often up for debate. For example,
what the word elite means and who the category includes is largely debated. It is an extremely
important concept, but tragically, “When it comes to making the term precise, and formulating
some agreed procedures whereby it might be given concrete value,” its vagueness causes
problems (Moyser and Wagstaff). This problem plagues political science, more so than older
sciences, because many concepts are still being discovered and, “Consensus about prototypical
structure construct features is as much the exception as the rule,” leaving many constructs
incomparable across different studies (Shadish et al.). When determining what is included in a
construct category, the most important features of the category, referred to here as prototypical,
are used. But what these criteria are is largely unclear when theory doesn’t provide guidance. In
addition, in many instances the use of latent variables for concepts is preferable, as they provide
a better abstraction of the concept (Morrison et al.). Again, what components latent variables
ought to be broken up into is a question that is answered with theory.
Once constructs are fleshed out, theory is still needed to decide how to operationalize
them. Measurement can be thought of as a set of rules that guide the assignment of values to
subjects (Kerlinger et al.). In an extreme example, a study claimed a ninety-eight percent rate of
school attendance in an Indian state (King et al.). Such a high figure is
suspicious in any context, let alone in a state where low levels of commitment to education were
expected. Upon further inspection of the data, it was revealed that the measurement of
attendance was only based on the first day of class, when the student first entered school,
inflating a measurement of one day to represent seven years. In this example, what the rule of
measurement should not be seems obvious. However, rules for measurement are difficult to
develop when established theory is absent. Coding, an extremely common method of assigning
quantitative values, has an enormous opportunity for misuse without theory to guide the
researcher. The reliability of coding hinges on “Expert judgement,” which often “Inhibits inter-
subjective agreement” (Morrison et al.).
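To make the concern about inter-subjective agreement concrete, a minimal sketch follows of how coding agreement beyond chance can be checked with Cohen's kappa; the coders, categories, and values here are entirely hypothetical and serve only as an illustration.

from collections import Counter

# Hypothetical category codes assigned by two coders to the same ten documents.
coder_a = ["elite", "mass", "elite", "elite", "mass", "elite", "mass", "mass", "elite", "elite"]
coder_b = ["elite", "mass", "mass", "elite", "mass", "elite", "mass", "elite", "elite", "elite"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement rate

# Chance agreement: the probability that both coders pick the same category
# at random, given each coder's marginal category frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")

A kappa near zero would suggest the coding rules leave too much to expert judgement, while a value near one indicates the rules are being applied consistently across coders.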
After data is collected, scholars are then left with the task of making sense of the
observations, and hopefully, developing substantive analysis. A major goal of research is, to
reference the title of Nate Silver’s book, to distinguish the signal from the noise. Inference is done
with the primary purpose of distinguishing systematic variation from nonsystematic variation
(King et al.). Unsurprisingly, a lack of established theory muddies the waters again.
This problem is exacerbated by the field’s bent toward non-experimental research.
Kerlinger and Lee warn that, “Conducted without hypotheses, without predictions, [non-
experimental] research in which data are just collected and then interpreted, is even more
dangerous in its power to mislead,” than data observed in a laboratory setting. Access to large
data sets and computing power has exploded, birthing a breed of social scientist who searches for
gold by aimlessly panning the entirety of the Pacific Ocean. They often dump huge data sets
into models that Achen calls “Garbage-Can Probits,” undermining their scientific value.
Instead of attempting to verify (or disprove) a theory, this method of pursuing knowledge leads
to, “Post factum explanations [that]… are so flexible,” they are impossible to falsify. Their
explanatory power is limited to the narrow context of the data set used and cannot be
generalized. This causes issues for determining causality. Without substantial theory, one
cannot make an argument for what is systematic and what is not. A case can be made to reject
many causal claims in political science on this basis. Experimentation is rare, and having a
perfect sample is, for all intents and purposes, impossible. Several techniques for increasing
confidence in causal claims exist, but they all largely rely on theory. Pair matching is one
example that involves comparing like cases, observing what characteristics are different, and
drawing inferences about the system of interest. Comparative politics, as a field, conducts many qualitative
research projects in this manner. Knowing what cases to match is only possible when theory
provides arguments for what variables are explanatory and which are likely irrelevant.
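As an illustration of where theory enters the matching process, the sketch below pairs each hypothetical “treated” case with its most similar “control” case on covariates that theory marks as explanatory; the cases, covariates, and values are invented, and the distance measure is only one of many possible choices.

# A minimal, illustrative pair-matching sketch on hypothetical country cases.
cases = [
    {"id": "A", "treated": True,  "gdp_per_capita": 9.1, "polity_score": 7},
    {"id": "B", "treated": True,  "gdp_per_capita": 8.4, "polity_score": 3},
    {"id": "C", "treated": False, "gdp_per_capita": 9.0, "polity_score": 6},
    {"id": "D", "treated": False, "gdp_per_capita": 8.5, "polity_score": 2},
    {"id": "E", "treated": False, "gdp_per_capita": 7.2, "polity_score": 9},
]

def distance(x, y, covariates=("gdp_per_capita", "polity_score")):
    # Squared Euclidean distance over the theory-selected covariates.
    return sum((x[c] - y[c]) ** 2 for c in covariates)

treated = [c for c in cases if c["treated"]]
controls = [c for c in cases if not c["treated"]]

for t in treated:
    match = min(controls, key=lambda c: distance(t, c))
    print(f"treated case {t['id']} matched with control case {match['id']}")

The choice of covariates inside the distance function is exactly the point at which theory does its work: without it, there is no principled way to decide which differences between cases matter.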
The state of theory, and subsequently research, may now seem bleak. A little ingenuity
brought our ancestors fire, so surely political scientists can find some light of their own.
Exploratory research efforts and strategic use of present theory may help expedite progress.
When theory is practically non-existent, some exploratory research can be of use. This
type of research should not be overused, and certainly should not take on the form of a garbage-can
model. Instead, scholars can conduct more open-ended research to formulate subsequent research
questions. Fenno refers to this type of work as “Soaking and poking,” which involves hanging
around the activities of interest. Such a tactic could take on the form of an open-ended survey
or interview. Interviews are especially useful, as they produce in depth answers and allow the
researcher to steer the conversation in unforeseen directions in response to the subject. The data
collected may not be suitable for quantitative analysis, and generalizations to the entire sample
may be impossible due to sample size and non-comparability of data, but it can put scholars in
the right direction for developing their theory.
On the issue of construct validity, a few recommendations can be made. First, it may be
wise to use several different operationalizations of constructs to cross-validate results (Shadish
et al.). Second, it is important to provide clear descriptions of the constructs used, both their
constituted and operational definitions, along with the rules of measurement, to make the data of
better use to future scholarly work. This provides as much an opportunity for scholars as a source of
frustration. In areas where theory is ill developed, there is room to review the previous literature,
systematize existing theory, and propose robust definitions and methods of measurement.
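As a small illustration of the first recommendation, the sketch below correlates two hypothetical operationalizations of the same construct, political engagement, one based on turnout and one based on a count of campaign activities; the data are invented, and a high correlation would lend, rather than guarantee, confidence that both measures tap the same concept.

from statistics import correlation  # available in Python 3.10 and later

# Two hypothetical operationalizations of "political engagement" for the same
# eight respondents: a turnout-based measure and a campaign-activity index.
turnout_measure = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5, 0.6, 0.3]
activity_index = [5, 1, 4, 0, 5, 2, 3, 1]

r = correlation(turnout_measure, activity_index)
print(f"Pearson r between the two operationalizations: {r:.2f}")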
To increase the leverage of the theory available, developing testable hypotheses based on
implications of the theory is incredibly useful (King et al.). Replication in all sciences is touted
as a virtue, and should not be limited to repeating the same studies in the same settings. It may also
include testing implications (Kerlinger and Lee). In some cases, the same data sets may be of
use!
Despite its young age, progress in political science is apparent. Where other
disciplines have a bedrock of established theory on which to build, political science’s foundation
is still being formed. This presents several problems for developing methods of data collection
and analysis, notably when specifying constructs, making quantitative measurements, and
identifying spuriousness. The most promising way to maximize the potential use of existing
theory is to test its implications. Moving forward, it is critical that scholars take time to explicitly
and comprehensively outline the theories guiding their research so that others may build upon
them. Fields of science see the most advancement when research is done in a systematic manner,
preventing scholars from wasting resources on redundant questions. Theory is the key for
advancement.
QUESTION #2
Data collection can be a daunting task. Elite interviewing in particular is extremely
resource heavy as interviews may take hours and require expenses for travel. The rewards of this
method, for many, outweigh the costs. Studying elites gives scholars an opportunity to
focus, “On some fundamental questions about the way society ought to be organized, and the
roles of the individuals who comprise it,” (Moyser and Wagstaff, Page 2). Surveys and
aggregate data collection may be cheaper, but typically are not useful for learning about the
personal policy preferences of elites (S&B, Page 61).
A lack of representative sampling and the difficulty of building rapport are two main challenges facing a
student of elites wishing to advance theory. This essay will discuss each in turn, first examining how they
present difficulties for theory testing and then outlining strategies for overcoming them.
THE SAMPLE PROBLEM
Theory testing relies on the ability to generalize observed data to the population at large.
In many cases, this involves testing a model with various statistical techniques, which nearly all
assume the sample observed is representative of the population of interest. Usually this
assumption is met by taking a random sample, but this very likely is not the case when elite
interviewing is used for two primary reasons. First, interviews are much more resource intensive
than other methods such as aggregate sampling and surveying. This problem has no solution, per
se. Instead, it is a matter of balancing depth and breadth and making choices. There is some
temptation to choose breadth. Berry points out that, “In elite interviewing the error term is
largely hidden to those outside the project while the number of cases, the “n,” is there for all to
judge,” (Page 680). A major strength of the interviewing method is its ability to uncover more
nuanced and lengthy responses. Semi-structured interviews seem to be the sweet spot for
researchers, providing enough structure to collect comparable data without sacrificing too much
quality. Second, it may be difficult to gain access to elites. Fortunately, some scholars have
found ways to succeed.
The issue of access has grown over the years because, “The time demands on
members have increased and because there are more scholars seeking interviews,” than ever
before (S&B, Page 63). Fenno postulates that some political elites may decline research requests
due to a mistrust of academia, but Sinclair and Brady find this is certainly not the norm (Fenno,
Page 63). They also observe that Congressmen are considerably easier to access than key
decision makers in other governments, as well as the Senate, and provide a unique opportunity
for political scientists.
When connections exist, they greatly aid in gaining access (Fenno, Page 61). This
has the potential to create a biased sample, as the cases selected have something in common that the
general population does not: a relationship with the research team. Sending form letters to the
sample and following up with a phone call is a common strategy. The letters should briefly state
an interest in studying the subject, but should not be specific (Aberbach et al., Page 12).
Persistence is key, as staffers respond well to exhibitions of serious intent and willingness (S&B,
Page 64).
Interviewing staff, who are generally much more accessible, can provide an alternative to
interviewing the elite directly (S&B, Page 64). On average, Aberbach found it took 3.97 contacts
to arrange an interview with a congressman, but only 1.52 contacts to get an interview with their
office. In addition, the definition of “elite” is debated amongst some scholars, and it could be
argued that staff themselves are part of the elite as they play a central role in the activities of
politics (M&W, page 4). Either way, developing positive and professional relationships with
staffers is essential to accessing information either directly or by setting up appointments.
RAPPORT
Building rapport presents another major challenge for interviewers. Without it,
researchers risk gathering data that is not representative of the real world, and consequently
unreliable. Making generalizations, and subsequently theory, when the reliability of data is
questionable can lead to misleading results. Some subjects may inflate their role in the political
process and speak in hyperbole (Berry, Page 680). Others, often called, “Defensive elites,” may
omit information and, “Provide at best questionable evidence,” (Moyser and Wagstaff, Page 18).
Aberbach et al. argue that data collection in elite interviews, especially open-ended ones, rests
greatly upon the, “Receptivity of the respondents,” (Page 6).
A variety of solutions is available. However, it is impossible to ever know how complete
a picture respondents will provide. One simple way to detect if a respondent is being untruthful
is to use multiple sources (Berry, 680). When available, data gleaned from interviews can be
cross-validated with known values for variables. Unfortunately, this is not possible for most data
gathered by the interview methodology. Comparing interviews with one another, and even
documents, may at least provide partial verification (S&B, Page 68).
In order to minimize these issues, building rapport is essential. Practical advice on how
to gain the trust of the subject is abundant. Several authors suggest that the researcher begin the
relationship by emphasizing his or her role as a scholar. Fenno starts every first encounter by
presenting himself as a, “serious scholar,” who will not, “kiss and tell,” (Page 64). Sinclair and
Brady stress that it is important the subject understands, “You are…not a reporter…and that his
words will not appear in the newspaper the next morning,” (Page 65).
Conducting interviews on a, “Not-for-attribution basis,” may help ease the interviewee’s
anxieties. Scholars often make an agreement to not attribute quotes to any individual respondent
(S&B, Page 65). Omission of any identifiable qualities may reduce the usefulness of the data
set, and such a cost is important to consider. Question order can help build rapport as well.
Leech recommends asking relatively easy, non-biographical questions first (Page 666). Personal
and sensitive questions should be held until the middle or end of the interview. Restating the
respondent’s answers in their own language may also help indicate that the interviewer is listening and
provides an opportunity for the respondent to make clarifications. Presuming questions, used
sparingly, may also help ease a respondent into answering sensitive questions (Leech, Page 666).
Asking, “How much did your organization give in soft money donations,” instead of, “Did
you give soft money,” implies that soft money donations are normal and reduces the stigma that
may lead to a less accurate answer.
Fenno notes that rapport is greatly aided through time and loyalty. Time and opportunity
to display loyalty likely will not arise for scholars conducting more traditional interviews.
Fenno gathered data using the participant observation method of data collection. This method
shows the most promise for building rapport but comes with some major drawbacks. It is
extremely costly which limits the sample size, making it difficult to draw generalizations to
further theory. The method also produces data that is largely non-quantifiable (Fenno, Page 90).
The participant observation method is an extreme case that makes the need for a balance
between rapport and quality data glaring. There is also a higher risk of the researcher, “Going
native,” by coming too close to the situation to be academically objective.
The implications rapport has on theory testing depend greatly on the research question at
hand. If the scholar is interested in discovering an objective truth, bias and omitted data will
hinder one’s ability to test a hypothesis. However, if a scholar is interested in the behavior of
elites or in understanding their point of view, this problem is less relevant. Berry also argues that
if the research question is narrow in scope, one botched interview will not have as much of an
impact on the results (Page 680).
Gathering a representative sample and extracting reliable information make the elite
interviewing method of data collection challenging, and these are far from the only concerns.
Comparability across cases is highly questionable and also restricts the ability to further theory.
Despite these difficulties, it is difficult to find any other method that provides as much
insight into the decision-making processes political leaders go through.