
Scale (social sciences)
From Wikipedia, the free encyclopedia

In the social sciences, scaling is the process of measuring or ordering entities with respect to quantitative attributes or traits. For example, a scaling technique might involve estimating individuals' levels of extraversion, or the perceived quality of products. Certain methods of scaling permit estimation of magnitudes on a continuum, while other methods provide only for relative ordering of the entities. See level of measurement for an account of qualitatively different kinds of measurement scales.

Comparative and noncomparative scaling

With comparative scaling, the items are directly compared with each other (example: Do you prefer Pepsi or Coke?). In noncomparative scaling, each item is scaled independently of the others (example: How do you feel about Coke?).

Composite measures

Composite measures of variables are created by combining two or more separate empirical indicators into a single measure. Composite measures measure complex
concepts more adequately than single indicators, extend the range of scores available, and are more efficient at handling multiple items.

In addition to scales, there are two other types of composite measures. Indexes are similar to scales except multiple indicators of a variable are combined into a single measure. The index of consumer confidence, for example, is a combination of several measures of consumer attitudes. A typology is similar to an index except the variable is measured at the nominal level. Indexes are constructed by accumulating scores assigned to individual attributes, while scales are constructed through the assignment of scores to patterns of attributes. While indexes and scales provide measures of a single dimension, typologies are often employed to examine the intersection of two or more dimensions. Typologies are very useful analytical tools and can easily be used as independent variables, although since they are not unidimensional it is difficult to use them as a dependent variable.

Data types

The type of information collected can influence scale construction. Different types of information are measured in different ways; see in particular level of measurement.

1. Some data are measured at the nominal level. That is, any numbers used are mere labels: they express no mathematical properties. Examples are SKU inventory codes and UPC bar codes.
2. Some data are measured at the ordinal level. Numbers indicate the relative position of items, but not the magnitude of difference. An example is a preference ranking.
3. Some data are measured at the interval level. Numbers indicate the magnitude of difference between items, but there is no absolute zero point. Examples are attitude scales and opinion scales.
4. Some data are measured at the ratio level. Numbers indicate magnitude of difference and there is a fixed zero point. Ratios can be calculated.
Examples include age, income, price, costs, sales revenue, sales volume, and market share.

Scale construction decisions

• What level of data is involved (nominal, ordinal, interval, or ratio)?
• What will the results be used for?
• Should you use a scale, index, or typology?
• What types of statistical analysis would be useful?
• Should you use a comparative scale or a noncomparative scale?
• How many scale divisions or categories should be used (1 to 10; 1 to 7; -3 to +3)?
• Should there be an odd or even number of divisions? (Odd gives a neutral center value; even forces respondents to take a non-neutral position.)
• What should the nature and descriptiveness of the scale labels be?
• What should the physical form or layout of the scale be (graphic, simple linear, vertical, horizontal)?
• Should a response be forced or left optional?

Comparative scaling techniques

• Pairwise comparison scale - a respondent is presented with two items at a time and asked to select one (example: Do you prefer Pepsi or Coke?). This is an ordinal-level technique when a measurement model is not applied. Krus and Kennedy (1977) elaborated paired comparison scaling within their domain-referenced model. The Bradley-Terry-Luce (BTL) model (Bradley and Terry, 1952; Luce, 1959) can be applied to derive measurements, provided the data from paired comparisons have an appropriate structure. Thurstone's law of comparative judgment can also be applied in such contexts.
• Rasch model scaling - respondents interact with items, and comparisons between items are inferred from the responses to obtain scale values. Respondents are subsequently also scaled, based on their responses to items, given the item scale values. The Rasch model has a close relation to the BTL model.
• Rank-order scale - a respondent is presented with several items simultaneously and asked to rank them (example: rate the following advertisements from 1 to 10). This is an ordinal-level technique.
• Constant sum scale - a respondent is given a constant sum of money, scrip, credits, or points and asked to allocate these to various items (example: if you had 100 yen to spend on food products, how much would you spend on product A, on product B, on product C, etc.?). This is an ordinal-level technique.
• Bogardus social distance scale - measures the degree to which a person is willing to associate with a class or type of people. It asks how willing the respondent is to make various associations. The results are reduced to a single score on a scale. There are also non-comparative versions of this scale.
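As a toy illustration of the pairwise comparison technique described above, tallying win counts from a set of paired choices yields an ordinal ranking. The function name and data below are hypothetical; a measurement model such as BTL would be needed to go beyond purely ordinal information:

```python
from collections import Counter

def rank_from_pairs(choices):
    """Order items by how often respondents chose them in (winner, loser)
    pairs. The result is an ordinal ranking only: win counts say nothing
    about the magnitude of preference differences."""
    wins = Counter(winner for winner, _ in choices)
    for _, loser in choices:          # items that never won still appear
        wins.setdefault(loser, 0)
    return [item for item, _ in wins.most_common()]
```

For example, `rank_from_pairs([("Pepsi", "Coke"), ("Pepsi", "Coke"), ("Coke", "Pepsi")])` returns `["Pepsi", "Coke"]`.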
• Q-sort scale - up to 140 items are sorted into groups based on a rank-order procedure.
• Guttman scale - a procedure to determine whether a set of items can be rank-ordered on a unidimensional scale. It utilizes the intensity structure among several indicators of a given variable. Statements are listed in order of importance. The rating is scaled by summing all responses until the first negative response in the list. The Guttman scale is related to Rasch measurement; specifically, Rasch models bring the Guttman approach within a probabilistic framework.

Non-comparative scaling techniques

• Continuous rating scale (also called the graphic rating scale) - respondents rate items by placing a mark on a line. The line is usually labeled at each end. There is sometimes a series of numbers, called scale points (say, from zero to 100), under the line. Scoring and codification are difficult.
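The Guttman scoring rule described above (sum responses until the first negative one) can be sketched as follows; this is a hypothetical minimal implementation, not taken from the article:

```python
def guttman_score(responses):
    """Score a Guttman scale: with statements listed in order, count
    affirmative answers up to (but not including) the first negative
    one; answers after the first negative are ignored."""
    score = 0
    for affirmed in responses:
        if not affirmed:
            break
        score += 1
    return score
```

For example, `guttman_score([True, True, False, True])` returns `2`: the trailing affirmative after the first negative does not count.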
• Likert scale - respondents are asked to indicate their degree of agreement or disagreement (from strongly agree to strongly disagree) on a five- or seven-point scale. The same format is used for multiple questions.
• Phrase completion scales - respondents are asked to complete a phrase on an 11-point response scale, in which 0 represents the absence of the theoretical construct and 10 represents the theorized maximum amount of the construct being measured. The same basic format is used for multiple questions.
• Semantic differential scale - respondents are asked to rate an item on various attributes using a seven-point scale. Each attribute requires a scale with bipolar terminal labels.
• Stapel scale - a unipolar ten-point rating scale. It ranges from +5 to -5 and has no neutral zero point.
• Thurstone scale - a scaling technique that incorporates the intensity structure among indicators.
• Mathematically derived scale - researchers infer respondents' evaluations mathematically. Two examples are multidimensional scaling and conjoint analysis.

Scale evaluation

Scales should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population, given the scale you have selected. Reliability is the extent to which a scale will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Alternative-forms reliability checks how similar the results are if the research is repeated using different forms of the scale. Internal consistency reliability checks how well the individual measures included in the scale are converted into a composite measure.

Scales and indexes have to be validated. Internal validation checks the relation between the individual measures included in the scale and the composite scale itself.
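Internal consistency reliability, described above, is commonly quantified with Cronbach's alpha (Cronbach, 1951). A minimal sketch, assuming complete numeric item responses, population variances, and a non-zero variance of total scores:

```python
def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per item.
    Returns Cronbach's alpha, a standard index of internal consistency:
    (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When every item moves in lockstep across respondents, alpha is 1; weaker inter-item agreement drives it down.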
External validation checks the relation between the composite scale and other indicators of the variable, indicators not included in the scale. Content validation (also called face validity) checks how well the scale measures what it is supposed to measure. Criterion validation checks how meaningful the scale criteria are relative to other possible criteria. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity, discriminant validity, and nomological validity (Campbell and Fiske, 1959; Krus and Ney, 1978). The coefficient of reproducibility indicates how well the data from the individual measures included in the scale can be reconstructed from the composite scale.

References

• Bradley, R.A. & Terry, M.E. (1952). Rank analysis of incomplete block designs, I: The method of paired comparisons. Biometrika, 39, 324-345.
• Campbell, D. T. & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
• Hodge, D. R. & Gillespie, D. F. (2003). Phrase completions: An alternative to Likert scales. Social Work Research, 27(1), 45-55.
• Hodge, D. R. & Gillespie, D. F. (2005). Phrase completion scales. In K. Kempf-Leonard (Ed.), Encyclopedia of Social Measurement (Vol. 3, pp. 53-62). San Diego: Academic Press.
• Krus, D. J. & Kennedy, P. H. (1977). Normal scaling of dominance matrices: The domain-referenced model. Educational and Psychological Measurement, 37, 189-193.
• Krus, D. J. & Ney, R. G. (1978). Convergent and discriminant validity in item analysis. Educational and Psychological Measurement, 38, 135-137.
• Luce, R.D. (1959). Individual Choice Behaviours: A Theoretical Analysis. New York: J. Wiley.

See also

• Rating scale
• Level of measurement
• Social research
• Marketing
• Marketing research
• Quantitative marketing research

Lists of related topics

• List of marketing topics
• List of management topics
• List of economics topics

Rating scale
A rating scale is a set of categories designed to elicit information about a quantitative attribute in social science. Common examples are the Likert scale and 1-10 rating scales, for which a person selects the number considered to reflect the perceived quality of a product.

Background

In psychometrics, rating scales are often referenced to a statement which expresses an attitude or perception toward something. The most common example of such a rating scale is the Likert scale, in which a person is asked to select a category label from a list indicating the extent of disagreement or agreement with a statement.

The basic feature of any rating scale is that it consists of a number of categories, usually assigned integers. For example, a Likert item might look as follows.

Statement: I could not live without my iPod.

Response options:
1. Strongly disagree
2. Disagree
3. Agree
4. Strongly agree

It is common to treat the numbers obtained from a rating scale directly as measurements, by calculating averages or, more generally, applying any arithmetic operations. Doing so is not, however, justified. In terms of the levels of measurement proposed by S.S. Stevens, the data are ordinal categorisations. This means, for example, that to agree strongly with the
above statement implies a more favourable perception of iPods than does to agree with the statement. However, the numbers are not interval-level measurements in Stevens's schema, which means that equal numeric differences do not represent equal intervals in the degree to which one values iPods. For example, the difference between strong agreement and agreement is not necessarily the same as the difference between disagreement and agreement. Strictly, even demonstrating that categories are ordinal requires empirical evidence based on patterns of responses (Andrich, 1978).

More than one rating scale is required to measure an attitude or perception, due to the requirement for statistical comparisons between the categories in the polytomous Rasch model for ordered categories (Andrich, 1978). In terms of classical test theory, more than one question is required to obtain an index of internal reliability such as Cronbach's alpha (Cronbach, 1951), which is a basic criterion for assessing the effectiveness of a rating scale and, more generally, a psychometric instrument.

Rating scales used online

Rating scales are used widely online in an attempt to provide indications of consumer opinions of products. Examples of sites which employ rating scales are IMDb, the Internet Book List, Yahoo! Movies, and BoardGameGeek. The Criticker website uses a rating scale from 0 to 100 in order to obtain "personalised film recommendations".

In almost all cases, online rating scales allow only one rating per user per product, though there are exceptions which allow users to rate products in relation to several qualities. Most online rating facilities also provide few or no qualitative descriptions of the rating categories, although again there are exceptions, such as Yahoo! Movies, which labels each of the categories between F and A+, and BoardGameGeek, which provides explicit descriptions of each category from 1 to 10.
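The earlier point that rating data are ordinal categorisations has practical consequences for aggregation. A hypothetical comparison of two films shows how the mean (which treats categories as interval numbers) and the median (which uses only their ordering) can disagree:

```python
import statistics

# Hypothetical ratings of two films on a 1-10 scale.
film_a = [10, 10, 10, 1, 1]   # polarising: loved or loathed
film_b = [7, 7, 7, 7, 7]      # consistently liked

mean_a = statistics.mean(film_a)      # 6.4 -- treats categories as interval
mean_b = statistics.mean(film_b)      # 7
median_a = statistics.median(film_a)  # 10 -- uses only the ordering
median_b = statistics.median(film_b)  # 7
```

The mean ranks film B above film A, while the median ranks A above B; with ordinal data there is no principled way to say which summary is "right", which is one reason aggregate ranking lists built from such ratings should be treated with caution.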
Often, only the top and bottom categories are described, such as on IMDb's online rating facility.

With each user rating a product only once, for example in a category from 1 to 10, there is no means of evaluating internal reliability using an index such as Cronbach's alpha. It is therefore impossible to evaluate the validity of the ratings as measures of viewer perceptions. Establishing validity would require establishing both reliability and accuracy (i.e. that the ratings represent what they are supposed to represent).

Another fundamental issue is that online ratings usually involve convenience sampling, much like television polls; i.e. they represent only the conglomeration of those inclined to submit ratings. Sampling is one factor which can lead to results which have a specific bias or are only relevant to a specific subgroup. To illustrate the importance of such factors, consider an example. Suppose that a film's marketing strategy and reputation are such that 90% of its audience are attracted to the particular kind of film; i.e. it does not appeal to a broad audience. Suppose also that the film is very popular among the audience that does see the
film and, in addition, that those who feel most strongly about the film are inclined to rate the film online. This combination may lead to very high ratings of the film which do not generalize beyond the people who actually see the film (or possibly even beyond those who actually rate it).

Qualitative description of categories is an important feature of a rating scale. For example, if only the points 1-10 are given without description, some people may select 10 rarely, whereas others may select the category often. If, instead, "10" is described as "near flawless", the category is more likely to mean the same thing to different people. This applies to all categories, not just the extreme points. Even with category descriptions, some raters may be harsher than others. Rater harshness is also a consideration in marking essays in educational contexts. [1]

These issues are compounded when aggregated statistics such as averages are used for lists and rankings of products. User ratings are at best ordinal categorizations. While it is not uncommon to calculate averages or means for such data, doing so cannot be justified, because calculating averages requires equal intervals to represent the same difference between levels of perceived quality. The key problems with aggregate data based on the kinds of rating scales commonly used online are as follows:

• Averages should not be calculated for data of the kind collected.
• It is usually impossible to evaluate the reliability or validity of user ratings.
• Products are not compared with respect to explicit, let alone common, criteria.
• Only users inclined to submit a rating for a product do so.
• Data are usually not published in a form that permits evaluation of the product ratings.

Rating scales commonly used to detect ADHD

1. ADD-H Comprehensive
Teacher Rating Scale (ACTeRS)
2. ADHD Rating Scale
3. BASC-2: Behavior Assessment System for Children, Second Edition
4. Behavior Rating Inventory of Executive Function (BRIEF)
5. Brown Attention-Deficit Disorder Scales
6. Child Behavior Checklist (CBCL-Teacher)
7. Child Attention Profile (CAP)
8. Child Symptom Inventories (CSI)
9. Conners Teacher and Parent Rating Scales-Revised
10. Home Situations Questionnaire
11. IOWA-Conners Rating Scale
12. School Situations Questionnaire
13. SNAP-IV Rating Scale and SNAP-IV-C Rating Scale
14. Vanderbilt Assessment Scale (Teacher Informant and Parent Informant)

References

• Cronbach, L. J. (1951). Coefficient alpha and the internal structure of
tests. Psychometrika, 16, 297-333.
• Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43, 357-374.

See also

Level of measurement

The level of measurement of a variable in mathematics and statistics is a classification that was proposed in order to describe the nature of information contained within numbers assigned to objects and, therefore, within the variable. The levels were proposed by Stanley Smith Stevens in his 1946 article "On the theory of scales of measurement". According to Stevens's theory of scales, different mathematical operations on variables are possible, depending on the level at which a variable is measured.
Classification levels

According to the classification scheme, in statistics the kinds of descriptive statistics and significance tests that are appropriate depend on the level of measurement of the variables concerned. Stevens proposed four levels of measurement:

• nominal (or categorical)
• ordinal
• interval
• ratio

Nominal measurement

In this type of measurement, names are assigned to objects as labels. This assignment is performed by evaluating, by some procedure, the similarity of the to-be-measured instance to each of a set of named exemplars or category definitions. The name of the most similar named exemplar or definition in the set is the "value" assigned by nominal measurement to the given instance. If two instances have the same name associated with them, they belong to the same category, and that is the only significance that nominal measurements have. For practical data processing the names may be numerals, but in that case the numerical value of these numerals is irrelevant. The only comparisons that can be made between variable values are equality and inequality. There are no "less than" or "greater than" relations among the classifying names, nor operations such as addition or subtraction. "Nominal measurement" was first identified by psychologist Stanley Smith Stevens in the context of a child learning to categorize colors (red, blue, and so on) by comparing the similarity of a perceived color to each of a set of named colors previously learned by ostensive definition. Other examples include geographical location in a country represented by that country's international telephone access code, the marital status of a person, or the make or model of a car. The only kind of measure of central tendency is the mode. Statistical dispersion may be measured with a variation ratio, index of qualitative variation, or via information entropy, but no notion of standard deviation exists.
Variables that are measured only nominally are also called categorical variables. In social research, variables measured at a nominal level include gender, race, religious affiliation, political party affiliation, college major, and birthplace.

Ordinal measurement

In this classification, the numbers assigned to objects represent the rank order (1st, 2nd, 3rd, etc.) of the entities measured. The numbers are called ordinals. The variables are called ordinal variables or rank variables. Comparisons of greater and less can be made, in addition to equality and inequality. However, operations such as conventional addition and subtraction are still meaningless. Examples include the Mohs scale of mineral hardness; the results of a horse race, which say only which horses arrived first, second,
third, etc., but give no time intervals; and many measurements in psychology and other social sciences, for example attitudes like preference, conservatism or prejudice, and social class. The central tendency of an ordinally measured variable can be represented by its mode or its median; the latter gives more information.

Interval measurement

The numbers assigned to objects have all the features of ordinal measurements, and in addition equal differences between measurements represent equivalent intervals. That is, differences between arbitrary pairs of measurements can be meaningfully compared. Operations such as addition and subtraction are therefore meaningful. The zero point on the scale is arbitrary; negative values can be used. Ratios between numbers on the scale are not meaningful, so operations such as multiplication and division cannot be carried out directly. But ratios of differences can be expressed; for example, one difference can be twice another. The central tendency of a variable measured at the interval level can be represented by its mode, its median, or its arithmetic mean; the mean gives the most information. Variables measured at the interval level are called interval variables, or sometimes scaled variables, though the latter usage is not obvious and is not recommended. Examples of interval measures are the year date in many calendars, and temperature on the Celsius or Fahrenheit scale.

Ratio measurement

The numbers assigned to objects have all the features of interval measurement and also have meaningful ratios between arbitrary pairs of numbers. Operations such as multiplication and division are therefore meaningful. The zero value on a ratio scale is non-arbitrary. Variables measured at the ratio level are called ratio variables. Most physical quantities, such as mass, length or energy, are measured on ratio scales; so is temperature measured in kelvins, that is, relative to absolute zero.
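The distinction just drawn between the Celsius scale (interval, arbitrary zero) and the kelvin scale (ratio, zero at absolute zero) can be made concrete. The snippet below is an illustrative sketch:

```python
def to_kelvin(celsius):
    """Convert Celsius (interval scale, arbitrary zero) to kelvin
    (ratio scale, non-arbitrary zero at absolute zero)."""
    return celsius + 273.15

# "20 degrees C is twice 10 degrees C" is a statement about the labels,
# not about the temperatures themselves:
naive_ratio = 20 / 10                        # 2.0 -- misleading
true_ratio = to_kelvin(20) / to_kelvin(10)   # about 1.035

# Differences, by contrast, survive the change of zero point:
same_diff = abs((to_kelvin(20) - to_kelvin(10)) - (20 - 10)) < 1e-9  # True
```

Only on the kelvin scale does the ratio of two values describe a ratio of actual thermodynamic temperatures, which is why multiplication and division are reserved for ratio-level data.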
The central tendency of a variable measured at the ratio level can be represented by its mode, its median, its arithmetic mean, or its geometric mean; as with an interval scale, however, the arithmetic mean gives the most useful information. Social variables of ratio measure include age, length of residence in a given place, number of organizations belonged to, or number of church attendances in a particular period.

The interval and ratio measurement levels are sometimes collectively called "true measurement", although it has been argued that this usage reflects a lack of understanding of the uses of ordinal measurement. Only ratio or interval scales can correctly be said to have units of measurement.

Debate on classification scheme

There has been, and continues to be, debate about the merits of the classifications, particularly in the cases of the nominal and ordinal classifications (Michell, 1986). Thus, while Stevens's classification is widely adopted, it is not universally accepted (for example, Velleman & Wilkinson, 1993). [1]
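Whatever position one takes in that debate, Stevens's scheme is in practice used as a lookup from level to permitted comparisons, each level admitting everything the level below it does plus one more operation. A hypothetical sketch (the names are illustrative, not standard API):

```python
# Illustrative encoding of Stevens's four levels of measurement.
PERMITTED = {
    "nominal":  {"equality"},
    "ordinal":  {"equality", "order"},
    "interval": {"equality", "order", "difference"},
    "ratio":    {"equality", "order", "difference", "ratio"},
}

def allows(level, comparison):
    """True if `comparison` is meaningful for data measured at `level`."""
    return comparison in PERMITTED[level]
```

For instance, `allows("ordinal", "difference")` is `False`, which is why subtracting horse-race finishing positions is meaningless, while `allows("interval", "difference")` is `True`, matching the point above that Celsius differences can be compared even though Celsius ratios cannot.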
Among those who accept the classification scheme, there is also some controversy in the behavioural sciences over whether the mean is meaningful for ordinal measurement. In terms of measurement theory it is not, because the arithmetic operations are not made on numbers that are measurements in units, and so the results of computations do not give numbers in units. However, many behavioural scientists use means for ordinal data anyway. This is often justified on the basis that ordinal scales in behavioural science are really somewhere between true ordinal and interval scales: although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude. For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus some argue that, so long as the unknown interval difference between ordinal scale ranks is not too variable, interval-scale statistics such as means can meaningfully be used on ordinal-scale variables.

L. L. Thurstone made progress toward developing a justification for obtaining interval-level measurements based on the law of comparative judgment. Further progress was made by Georg Rasch, who developed the probabilistic Rasch model, which provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.

References

• Babbie, E. The Practice of Social Research (10th ed.). Wadsworth, Thomson Learning Inc. ISBN 0-534-62029-9.
• Michell, J. (1986). Measurement scales and statistics: A clash of paradigms. Psychological Bulletin, 3, 398-407.
• Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
• Stevens, S.S. (1951). Mathematics, measurement and psychophysics. In S.S. Stevens (Ed.),
Handbook of Experimental Psychology (pp. 1-49). New York: Wiley.
• Velleman, P. F. & Wilkinson, L. (1993). Nominal, ordinal, interval, and ratio typologies are misleading. The American Statistician, 47(1), 65-72.

Social research

Social research refers to research conducted by social scientists (primarily within sociology and social psychology), but also within other disciplines such as social policy, human geography, political science, social anthropology and education. Sociologists and other social scientists study diverse things: from census data on hundreds of thousands of human beings, through the in-depth analysis of the life of a single important person, to monitoring what is happening on a street today, or what was happening a few hundred years ago.

Social scientists use many different methods in order to describe, explore and understand social life. Social methods can generally be subdivided into two broad categories. Quantitative methods are concerned with attempts to quantify social phenomena and to collect and analyse numerical data, and focus on the links among a smaller number of attributes across many cases. Qualitative methods, on the other hand, emphasise personal experiences and interpretation over quantification, are more concerned with
understanding the meaning of social phenomena, and focus on links among a larger number of attributes across relatively few cases. While very different in many aspects, both qualitative and quantitative approaches involve a systematic interaction between theories and data.

Common tools of quantitative researchers include surveys, questionnaires, and secondary analysis of statistical data that has been gathered for other purposes (for example, censuses or the results of social attitudes surveys). Commonly used qualitative methods include focus groups, participant observation, and other techniques.

Ordinary human inquiry

Before the advent of sociology and the application of the scientific method to social research, human inquiry was mostly based on personal experiences and received wisdom in the form of tradition and authority. Such approaches often led to errors such as inaccurate observations, overgeneralisation, selective observation, subjectivity and lack of logic.

Foundations of social research

Social research (and social science in general) is based on logic and empirical observations. Charles C. Ragin writes in his book Constructing Social Research that "Social research involved the interaction between ideas and evidence. Ideas help social researchers make sense of evidence, and researchers use evidence to extend, revise and test ideas". Social research thus attempts to create or validate theories through data collection and data analysis, and its goal is exploration, description and explanation. It
should never be led by, or be mistaken for, philosophy or belief. Social research aims to find social patterns of regularity in social life and usually deals with social groups (aggregates of individuals), not individuals themselves (although the science of psychology is an exception here). Research can also be divided into pure research and applied research. Pure research has no application to real life, whereas applied research attempts to influence the real world.

There are no laws in social science that parallel the laws in natural science. A law in social science is a universal generalization about a class of facts. A fact is an observed phenomenon, and observation means it has been seen, heard or otherwise experienced by the researcher. A theory is a systematic explanation for the observations that relate to a particular aspect of social life. Concepts are the basic building blocks of theory and are abstract elements representing classes of phenomena. Axioms or postulates are basic assertions assumed to be true. Propositions are conclusions drawn about the relationships among concepts, based on analysis of axioms. Hypotheses are specified expectations about empirical reality which are derived from propositions. Social research involves testing these hypotheses to see if they are true.

Social research involves creating a theory, operationalization (measurement of variables) and observation (actual collection of data to test the hypothesized relationship). Social theories are written in the language of variables; in other words, theories describe logical relationships between variables. Variables are logical sets of attributes, with people being the carriers of those variables (for example, gender can be a variable with two attributes: male and female). Variables are also divided into independent variables (data) that influence the dependent variables (which scientists are trying to explain).
For example, in a study of how different dosages of a drug are related to the severity of symptoms of a disease, the measure of symptom severity is the dependent variable, and the administration of the drug in specified doses is the independent variable. Researchers will compare the different values of the dependent variable (severity of the symptoms) and attempt to draw conclusions.

[edit] Types of explanations

Explanations in social theories can be idiographic or nomothetic. An idiographic approach to explanation is one where the scientist seeks to exhaust the idiosyncratic causes of a particular condition or event, i.e. tries to provide all possible explanations of a particular case. Nomothetic explanations tend to be more general, with scientists trying to identify a few causal factors that impact a wide class of conditions or events. For example, when dealing with the problem of how people choose a job, an idiographic explanation would list all possible reasons why a given person (or group) chooses a given job, while a nomothetic explanation would try to find factors that determine why job applicants in general choose a given job.

[edit] Types of inquiry

Social research can be deductive or inductive. Inductive inquiry (also known as grounded research) is a model in which general principles (theories) are developed from specific observations. In deductive inquiry, specific expectations or hypotheses are developed on the basis of general principles (i.e. social scientists start from an existing theory, and then search for proof). For example, in inductive research, if a scientist finds that some specific religious minorities tend to favour a specific political view, he may then extrapolate this to the hypothesis that all religious minorities tend to have the same political view. In deductive research, a scientist would start from the hypothesis that religious affiliation influences political views and then begin observations to test that theory.

[edit] Quantitative / qualitative debate

There is usually a trade-off between the number of cases and the number of variables that social research can study. Qualitative research usually involves few cases with many variables, while quantitative research involves many cases with few variables. There is some debate over whether "quantitative research" and "qualitative research" methods can be complementary: some researchers argue that combining the two approaches is beneficial and helps build a more complete picture of the social world, while other researchers believe that the epistemologies that underpin each of the approaches are so divergent that they cannot be reconciled within a research project. While quantitative methods are based on a natural-science, positivist model of testing theory, qualitative methods are based on interpretivism and are more focused on generating theories and accounts. Positivists treat the social world as something that is out there, external to the social scientist and waiting to be researched.
Interpretivists, on the other hand, believe that the social world is constructed by social agency, and that any intervention by a researcher will therefore affect social reality. Herein lies the supposed conflict between quantitative and qualitative approaches: quantitative approaches traditionally seek to minimise intervention in order to produce valid and reliable statistics, whereas qualitative approaches traditionally treat intervention as something necessary (often arguing that participation can lead to a better understanding of a social situation).

However, it is increasingly recognised that the significance of these differences should not be exaggerated and that quantitative and qualitative approaches can be complementary. They can be combined in a number of ways, for example:

1. Qualitative methods can be used to develop quantitative research tools. For example, focus groups could be used to explore an issue with a small number of people, and the data gathered using this method could then be used to develop a quantitative survey questionnaire that could be administered to a far greater number of people, allowing results to be generalised.
2. Qualitative methods can be used to explore and facilitate the interpretation of relationships between variables. For example, researchers may inductively hypothesize that there would be a positive relationship between the positive attitudes of sales staff and the amount of sales of a store. However, quantitative, deductive, structured observation of 576 convenience stores could reveal that this was not the case, and in order to understand why the relationship between the variables was negative the researchers may undertake qualitative case studies of four stores, including participant observation. This might abductively confirm that the relationship was negative, but show that it was not the positive attitude of sales staff that led to low sales; rather, high sales led to busy staff, who were less likely to express positive emotions at work![1]

Quantitative methods are useful for describing social phenomena, especially on a larger scale. Qualitative methods allow social scientists to provide richer explanations (and descriptions) of social phenomena, frequently on a smaller scale. By using two or more approaches researchers may be able to triangulate their findings and provide a more valid representation of the social world.

A combination of different methods is often used within "comparative research", which involves the study of social processes across nation-states, or across different types of society.

[edit] Paradigms

Social scientists usually follow one or more of several specific sociological paradigms (points of view):

• The conflict paradigm focuses on the ability of some groups to dominate others, or on resistance to such domination.
• The ethnomethodology paradigm examines how people make sense of social life in the process of living it, as if each were a researcher engaged in enquiry.
• The feminist paradigm focuses on how male dominance of society has shaped social life.
• The Darwinism paradigm sees a progressive evolution in social life.
• The positivism paradigm was an early 19th-century approach, now considered obsolete in its pure form. Positivists believed we can scientifically discover all the rules governing social life.
• The structural functionalism paradigm, also known as the social systems paradigm, addresses what functions various elements of the social system perform in regard to the entire system.
• The symbolic interactionism paradigm examines how shared meanings and social patterns are developed in the course of social interactions.

Of these, the conflict paradigm of Karl Marx, the symbolic interactionism of Max Weber and the structural functionalism of Emile Durkheim are the best known.

[edit] The ethics of social research

Two main assumptions of ethics in social research are:

• voluntary participation
• no harm to subjects

[edit] See also

• Analytic frame
• Scale (social sciences)
• Program evaluation

[edit] Social research organisations

• Centre for Rural Social Research, Australia
• Economic and Social Research Council, United Kingdom (research funding council)
• Institute for Public Policy and Social Research, USA
• Institute for Social Research, Germany
• Mass-Observation, United Kingdom
• Matrix Research & Consultancy Limited, United Kingdom
• Melbourne Institute of Applied Economic and Social Research, Australia
• National Centre for Social Research, United Kingdom
• National Opinion Research Center, USA
• New School for Social Research, New York City
• Mada al-Carmel - The Arab Center for Applied Social Research, Haifa, Israel

[edit] Social research projects

• Radio Project, USA, 1937
• The Global Social Change Research Project

[edit] Social research techniques

• Quantitative methods
  o structured interviewing
  o statistical surveys and questionnaires
  o structured observation
  o content analysis
  o secondary analysis
  o quantitative marketing research
• Qualitative methods
  o analytic induction
  o ethnography
  o focus groups
  o morphological analysis
  o participant observation
  o semi-structured interviewing
  o unstructured interviewing
  o textual analysis
  o theoretical sampling

[edit] Notes

Quantitative marketing research
From Wikipedia, the free encyclopedia

Quantitative marketing research is the application of quantitative research techniques to the field of marketing. It has roots both in the positivist view of the world and in the modern marketing viewpoint that marketing is an interactive process in which buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion. As a social research method, it typically involves the construction of questionnaires and scales. People who respond (respondents) are asked to complete the survey. Marketers use the information so obtained to understand the needs of individuals in the marketplace, and to create strategies and marketing plans.

Contents
[hide]
• 1 See also
• 2 Scope and requirements
• 3 Typical general procedure
• 4 Descriptive techniques
• 5 Inferential techniques
• 6 Types of hypothesis tests
• 7 Reliability and validity
• 8 Types of errors
• 9 References
• 10 See also
• 11 List of related topics

[edit] See also
• Quantitative research
• Qualitative research

[edit] Scope and requirements

Both descriptive and inferential statistical techniques can be used to analyse data and draw conclusions. Quantitative marketing research involves a number of respondents that may range from ten to ten million, and may include hypotheses and random sampling techniques to enable inference from the sample to the population. Marketing research may include both experimental and quasi-experimental research designs.

[edit] Typical general procedure

Put simply, there are five major steps involved in the research process:

1. Defining the problem
2. Research design
3. Data collection
4. Analysis
5. Report writing and presentation

In more detail, these steps involve:

1. Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
2. Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
3. Hypothesis specification - What claim(s) do we want to test?
4. Research design specification - What type of methodology to use? Examples: questionnaire, survey.
5. Question specification - What questions to ask? In what order?
6. Scale specification - How will preferences be rated?
7. Sampling design specification - What is the total population? What sample size is necessary for this population? What sampling method to use? Examples: cluster sampling, stratified sampling, simple random sampling, multistage sampling, systematic sampling, nonprobability sampling.
8. Data collection - Use mail, telephone, internet, or mall intercepts.
9. Codification and re-specification - Make adjustments to the raw data so they are compatible with statistical techniques and with the objectives of the research. Examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization.
10. Statistical analysis - Perform various descriptive and inferential techniques (see below) on the raw data. Make inferences from the sample to the whole population. Test the results for statistical significance.
11. Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
12. Write the research report - The report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10-minute presentation. Be prepared for questions.

[edit] Descriptive techniques
The descriptive techniques that are commonly used include:

• Graphical description
  o use graphs to summarize data
  o examples: histograms, scattergrams, bar charts, pie charts
• Tabular description
  o use tables to summarize data
  o examples: frequency distribution schedules, cross tabs
• Parametric description
  o estimate the values of certain parameters which summarize the data
    - measures of location or central tendency: arithmetic mean, median, mode, interquartile mean
    - measures of statistical dispersion: standard deviation, range, interquartile range, absolute deviation
    - measures of the shape of the distribution: skewness, kurtosis

[edit] Inferential techniques

Inferential techniques involve generalizing from a sample to the whole population, and testing hypotheses. A hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. Then a test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. A null hypothesis is a hypothesis that is presumed true until a hypothesis test indicates otherwise. Typically it is a statement about a parameter that is a property of a population; the parameter is often a mean or a standard deviation.

Commonly, such a hypothesis states that the parameters, or mathematical characteristics, of two or more populations are identical. For example, if we want to compare the test scores of two random samples of men and women, the null hypothesis would be that the mean score in the male population from which the first sample was drawn is the same as the mean score in the female population from which the second sample was drawn:

H0: μ1 = μ2
where:

H0 = the null hypothesis,
μ1 = the mean of population 1, and
μ2 = the mean of population 2.

The equality operator makes this a two-tailed test. In a one-tailed test, the operator is an inequality and the alternative hypothesis has directionality:

H0: μ1 ≤ μ2 (with the alternative hypothesis μ1 > μ2)

These are sometimes called hypotheses of significant difference, because the difference between two groups is tested with respect to one variable.

Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

H0: μ1 − μ2 = 0

A hypothesis of association is one where there is a single population but two traits being measured; it is a test of the association of the two traits within one group.

The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That set is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the alpha value of the test. After the data are available, the test statistic is calculated and we determine whether it falls inside the critical region. If the test statistic is inside the critical region, then our conclusion is that either the hypothesis is incorrect or an event of probability less than or equal to alpha has occurred.
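The decision rule just described can be sketched numerically. The following is a minimal illustration using only the Python standard library; the two samples are made-up test scores, and 1.96 is the familiar two-tailed critical value of the large-sample z statistic at alpha = 0.05 (with small samples like these, a t distribution would normally be used instead of the normal approximation).

```python
import math
import statistics

def two_sample_z(sample1, sample2):
    """Two-sample z statistic for H0: mu1 = mu2 (large-sample approximation)."""
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    se = math.sqrt(v1 / len(sample1) + v2 / len(sample2))
    return (m1 - m2) / se

# Hypothetical test scores for two groups (illustrative data only)
men   = [72, 75, 68, 80, 77, 74, 71, 69, 78, 73]
women = [75, 79, 72, 84, 81, 77, 74, 73, 82, 76]

z = two_sample_z(men, women)
critical_value = 1.96  # two-tailed critical region: |z| > 1.96 at alpha = 0.05
if abs(z) > critical_value:
    print("test statistic in critical region: reject H0")
else:
    print("not enough evidence to reject H0")
```

Here the statistic (about −2.03) falls in the critical region, so H0: μ1 = μ2 would be rejected at the 5% level.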
If the test statistic is outside the critical region, the conclusion is that there is not enough evidence to reject the hypothesis.

The significance level of a test is the maximum probability of accidentally rejecting a true null hypothesis (a decision known as a Type I error). For example, one may choose a significance level of, say, 5%, and calculate a critical value of a statistic (such as the mean) so that the probability of it exceeding that value, given the truth of the null hypothesis, is 5%. If the actual calculated statistic value exceeds the critical value, then it is significant "at the 5% level".

[edit] Types of hypothesis tests

• Parametric tests of a single sample:
  o t test
  o z test
• Parametric tests of two independent samples:
  o two-group t test
  o z test
• Parametric tests of paired samples:
  o paired t test
• Nominal/ordinal-level tests of a single sample:
  o chi-square
  o Kolmogorov-Smirnov one-sample test
  o runs test
  o binomial test
• Nominal/ordinal-level tests of two independent samples:
  o chi-square
  o Mann-Whitney U
  o median test
  o Kolmogorov-Smirnov two-sample test
• Nominal/ordinal-level tests for paired samples:
  o Wilcoxon test
  o McNemar test

Points to remember:

• If the sample is interval or ratio scaled and meets the relevant statistical assumptions (e.g. normality), it is eligible for a parametric test.
• If the sample is nominal or ordinal scaled, and/or does not meet those assumptions, it is not eligible for a parametric test; in that situation a non-parametric test must be used.

A non-parametric test should be used only when the sample is not eligible for a parametric test. Note that non-parametric tests are among the most used, and misused, techniques.

[edit] Reliability and validity

Research should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population.

Reliability is the extent to which a measure will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances; stability over repeated measures is assessed with the Pearson coefficient. Alternative-forms reliability checks how similar the results are if the research is repeated using different forms. Internal consistency reliability checks how well the individual measures included in the research are converted into a composite measure. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability). The value of the Pearson product-moment correlation coefficient is
adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Reliability may be improved by increasing the sample size.

Validity asks whether the research measured what it intended to measure. Content validation (also called face validity) checks how well the content of the research relates to the variables to be studied: are the research questions representative of the variables being researched? It is a demonstration that the items of a test are drawn from the domain being measured. Criterion validation checks how meaningful the research criteria are relative to other possible criteria; when the criterion is collected later, the goal is to establish predictive validity. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity (how well the research relates to other measures of the same construct), discriminant validity (how poorly the research relates to measures of opposing constructs), and nomological validity (how well the research relates to other variables as required by theory).

Internal validation, used primarily in experimental research designs, checks the relation between the dependent and independent variables: did the experimental manipulation of the independent variable actually cause the observed results? External validation checks whether the experimental results can be generalized.

Validity implies reliability: a valid measure must be reliable. But reliability does not necessarily imply validity: a reliable measure need not be valid.

[edit] Types of errors

Random sampling errors:
• sample too small
• sample not representative
• inappropriate sampling method used
• random errors

Research design errors:
• bias introduced
• measurement error
• data analysis error
• sampling frame error
• population definition error
• scaling error
• question construction error

Interviewer errors:
• recording errors
• cheating errors
• questioning errors
• respondent selection error

Respondent errors:
• non-response error
• inability error
• falsification error

Hypothesis errors:
• Type I error (also called alpha error): the study results lead to the rejection of the null hypothesis even though it is actually true
• Type II error (also called beta error): the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false

[edit] References
• Bradburn, Norman M. and Seymour Sudman. Polls and Surveys: Understanding What They Tell Us (1988)
• Converse, Jean M. Survey Research in the United States: Roots and Emergence 1890-1960 (1987), the standard history
• Glynn, Carroll J., Susan Herbst, Garrett J. O'Keefe, and Robert Y. Shapiro. Public Opinion (1999), textbook
• Oskamp, Stuart and P. Wesley Schultz. Attitudes and Opinions (2004)
• Webster, James G., Patricia F. Phalen, and Lawrence W. Lichty. Ratings Analysis: The Theory and Practice of Audience Research. Lawrence Erlbaum Associates, 2000
• Young, Michael L. Dictionary of Polling: The Language of Contemporary Opinion Research (1992)

[edit] See also

• Enterprise Feedback Management
• Marketing research
• Qualtrics
• Statistical survey
• Rating scale
• Master of Marketing Research

[edit] List of related topics

• list of marketing topics
• list of management topics
• list of economics topics
• list of finance topics

Marketing research
From Wikipedia, the free encyclopedia
Research is the search for and retrieval of existing, or the discovery or creation of new, information or knowledge for a specific purpose. Research has many categories, from medical research to literary research. Marketing research is a form of business research.

Business-to-Business (B2B) Marketing Research, previously known as Industrial Marketing Research, investigates the markets for products sold by one business to another, rather than to consumers.

Consumer Marketing Research is a form of applied sociology which concentrates on understanding the behaviours, whims and preferences of consumers in a market-based economy. The field of consumer marketing research as a statistical science was pioneered by Arthur Nielsen with the founding of the ACNielsen Company in 1923.

In addition to marketing research, other forms of business research include:

• Market research - broader in scope, it examines all aspects of a business environment. It asks questions about competitors, market structure, government regulations, economic trends, technological advances, and numerous other factors that make up the business environment (see environmental scanning). Sometimes the term refers more particularly to the financial analysis of companies, industries, or sectors; in this case, financial analysts usually carry out the research and provide the results to investment advisors and potential investors.
• Product research - looks at what products can be produced with available technology, and what new product innovations near-future technology can develop (see new product development).
• Advertising research - attempts to assess the likely impact of an advertising campaign in advance, and also to measure the success of a recent campaign.

Contents
[hide]
• 1 Types of marketing research
• 2 Marketing research methods
• 3 Business to business market research
• 4 Commonly used marketing research terms
• 5 Education in Marketing Research
• 6 References
• 7 See also
• 8 External links

[edit] Types of marketing research

Marketing research techniques come in many forms, including:

• test marketing - a small-scale product launch used to determine the likely acceptance of the product when it is introduced into a wider market
• concept testing - to test the acceptance of a concept by target consumers
• mystery shopping - an employee or representative of the market research firm anonymously contacts a salesperson and indicates he or she is shopping for a product; the shopper then records the entire experience. This method is often used for quality control or for researching competitors' products.
• store audit - to measure the sales of a product or product line at a statistically selected store sample, in order to determine market share or to determine whether a retail store provides adequate service
• demand estimation - to determine the approximate level of demand for the product
• commercial eye-tracking research - to examine advertisements, package designs, websites, etc. by analyzing the visual behavior of the consumer
• sales forecasting - to determine the expected level of sales given the level of demand, with respect to other factors like advertising expenditure, sales promotion, etc.
• customer satisfaction studies - exit interviews or surveys that determine a customer's level of satisfaction with the quality of the transaction
• distribution channel audits - to assess distributors' and retailers' attitudes toward a product, brand, or company
• price elasticity testing - to determine how sensitive customers are to price changes
• segmentation research - to determine the demographic, psychographic, and behavioural characteristics of potential buyers
• consumer decision process research - to determine what motivates people to buy and what decision-making process they use
• positioning research - how does the target market see the brand relative to competitors? what does the brand stand for?
• brand name testing - what do consumers feel about the names of the products?
• brand equity research - how favorably do consumers view the brand?
• advertising and promotion research - how effective are ads? do potential customers recall the ad, understand the message, and does the ad influence consumer purchasing behaviour?
• Internet strategic intelligence - searching for customer opinions on the Internet: chats, forums, web pages, blogs... where people express themselves freely about their experiences with products, becoming strong "opinion formers"
• marketing effectiveness and analytics - building models and measuring results to determine the effectiveness of individual marketing activities

All of these forms of marketing research can be classified as either problem-identification research or problem-solving research.

A company collects primary research by gathering original data. Secondary research is conducted on data published previously, usually by someone else. Secondary research costs far less than primary research, but seldom comes in a form that exactly meets the needs of the researcher.

A similar distinction exists between exploratory research and conclusive research. Exploratory research provides insights into and comprehension of an issue or situation; it should draw definitive conclusions only with extreme caution. Conclusive research draws conclusions: the results of the study can be generalized to the whole population.
Exploratory research is conducted to explore a problem and to get some basic idea of the solution at the preliminary stages of research; it may serve as the input to conclusive research. Exploratory information is collected through focus group interviews, reviews of literature or books, discussions with experts, and so on. It is unstructured and qualitative in nature. If a secondary source of data is unable to serve the purpose, a convenience sample of small size can be collected. Conclusive research is conducted to draw conclusions about the problem. It is essentially structured, quantitative research, and its output is the input to management information systems (MIS).

Exploratory research is also conducted to clarify the findings of conclusive/descriptive research when those findings are hard for the marketing manager to interpret. Sometimes conclusive research is conducted without a preceding exploratory stage, e.g. a customer satisfaction study that is run every year, because initial ideas relevant to the study are already readily available.

[edit] Marketing research methods

Methodologically, marketing research uses the following types of research designs:[1]

A - BASED ON QUESTIONING:

• Qualitative marketing research - generally used for exploratory purposes; small number of respondents; not generalizable to the whole population; statistical significance and confidence not calculated; examples include focus groups, in-depth interviews, and projective techniques
• Quantitative marketing research - generally used to draw conclusions; tests a specific hypothesis; uses random sampling techniques so as to infer from the sample to the population; involves a large number of respondents; examples include surveys and questionnaires

B - BASED ON OBSERVATIONS:

• Ethnographic studies - qualitative by nature; the researcher observes social phenomena in their natural setting; observations can occur cross-sectionally (observations made at one time) or longitudinally (observations occur over several time periods); examples include product-use analysis and computer cookie traces
• Experimental techniques - quantitative by nature; the researcher creates a quasi-artificial environment to try to control spurious factors, then manipulates at least one of the variables; examples include purchase laboratories and test markets
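The random sampling that quantitative designs rely on can be sketched with Python's standard library. The sampling frame of numbered respondents below is hypothetical; a real frame would come from a customer list or panel.

```python
import random

# Hypothetical sampling frame: 10,000 numbered respondents
population = list(range(1, 10_001))

random.seed(7)  # fixed seed so the draw is reproducible

# Simple random sampling: every respondent has an equal chance,
# drawn without replacement
srs = random.sample(population, 100)

# Systematic sampling: a random start, then every k-th respondent
k = len(population) // 100
start = random.randrange(k)
systematic = population[start::k]

print(len(srs), len(systematic))  # both yield samples of 100
```

Stratified, cluster, and multistage designs build on the same two primitives by applying them within subgroups of the frame.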
Researchers often use more than one research design. They may start with secondary research to get background information, then conduct a focus group (qualitative research design) to explore the issues. Finally they might do a full nationwide survey (quantitative research design) in order to devise specific recommendations for the client.

[edit] Business to business market research

Business-to-business (b2b) research is inevitably more complicated than consumer research. Researchers need to know what type of multi-faceted approach will answer the objectives, since it is seldom possible to find the answers using just one method. Finding the right respondents is crucial in b2b research, since they are often busy and may not want to participate. Encouraging them to "open up" is yet another skill required of the b2b researcher. Last, but not least, most business research leads to strategic decisions, which means that the business researcher must have expertise in developing strategies that are strongly rooted in the research findings and acceptable to the client.

There are four key factors that make b2b market research special and different from research in consumer markets:[2]

• The decision-making unit is far more complex in b2b markets than in consumer markets
• B2b products and their applications are more complex than consumer products
• B2b marketers address a much smaller number of customers, who are very much larger in their consumption of products than is the case in consumer markets
• Personal relationships are of critical importance in b2b markets.

[edit] Commonly used marketing research terms
Market research techniques resemble those used in political polling and social science research. Meta-analysis (also called the Schmidt-Hunter technique) refers to a statistical method of combining data from multiple studies or from several types of studies. Conceptualization means the process of converting vague mental images into definable concepts. Operationalization is the process of converting concepts into specific observable behaviors that a researcher can measure. Precision refers to the exactness of any given measure. Reliability refers to the likelihood that a given operationalized construct will yield the same results if re-measured. Validity refers to the extent to which a measure provides data that captures the meaning of the operationalized construct as defined in the study. It asks, "Are we measuring what we intended to measure?"

Applied research sets out to prove a specific hypothesis of value to the clients paying for the research. For example, a cigarette company might commission research that attempts to show that cigarettes are good for one's health. Many researchers have ethical misgivings about doing applied research.

Sugging (selling under the guise of market research) is a sales technique in which salespeople pretend to conduct marketing research, but with the real purpose of obtaining buyer motivation and buyer decision-making information to be used in a subsequent sales call.

Frugging is the practice of soliciting funds under the pretense of being a research organization.

[edit] Education in Marketing Research

There are a number of education opportunities in marketing research, most of them offered by universities or major business schools. A convenient and flexible approach is distance learning. "Principles of Marketing Research"[3] is a well-known and highly respected online course on marketing research run by the University of Georgia and supported by the Marketing Research Association (USA)[4] and ESOMAR (an international organization with headquarters in the Netherlands)[5]. It is based on the Marketing Research Core Body of Knowledge (MRCBOK©), a standard for education in marketing research. Recently, a specialized course in pharmaceutical market research was launched ("Principles of Marketing Research - Pharmaceutical Supplements").[6]

[edit] References