Research methodology
Published in: Technology, Education

  1. Research Methodology: Quantitative Research
  2. This section covers the following areas:
     - Overview of the research process: formulating a research problem
     - Identifying variables
     - Constructing hypotheses
     - The research design
     - Selecting a method of data collection
     - Collecting data using attitudinal scales
     - Establishing the validity and reliability of a research instrument
  3. Formulating a research problem
     - The first step in beginning a research project is to decide "What is the research question?" A research question is a question about the problem to be addressed; it is therefore focused on the content of the topic of interest.
     - First identifying and then specifying a research problem might seem like research tasks that ought to be easily accomplished. However, such is often not the case (Yegidis & Weinbach, 1991, p. 35).
     - It is essential for the problem you formulate to be able to withstand scrutiny in terms of the procedures required to be undertaken, so the researcher should always spend considerable time thinking it through.
     - Researchers should always have a clear idea of what they want to find out, not of what they think they must find.
     - This is the most critical step in the research process.
  4. Sources of a research problem
     - If the researcher has already selected a topic or question, the next step is to identify the sources of the research problem.
     - Most research in the humanities revolves around four Ps: People (the study population), and Problems, Programs and Phenomena (the subject area).
  5. Every research study has two aspects:
     1. The study population: people (individuals, organisations, groups, communities). They provide the required information, or the researcher collects information from or about them.
     2. The subject area:
        - Problems: issues, situations, needs, associations, profiles, etc.
        - Programs: contents, structure, outcomes, attributes, satisfaction, consumers
        - Phenomena: cause and effect
        This is the information that needs to be collected to find answers to the research question.
  6. Considerations in selecting a research problem
     - Interest: this should be the most important consideration in selecting a problem.
     - Magnitude: have sufficient knowledge about the research process to judge the scale of the work involved.
     - Measurement of concepts: if the researcher is using a concept in the study, they should be very clear about its indicators and their measurement.
     - Level of expertise: have an adequate level of expertise for the task proposed.
     - Relevance: select a topic that is of relevance to you as a professional.
     - Availability of data.
     - Ethical issues: consider the ethical issues involved in formulating the research problem.
  7. Steps in the formulation of a research problem
     - Step 1: Identify a broad field or subject area of interest to you.
     - Step 2: Dissect the broad area into subareas.
     - Step 3: Select what is of most interest to you.
     - Step 4: Raise research questions.
     - Step 5: Formulate objectives.
     - Step 6: Assess your objectives.
     - Step 7: Double-check.
  8. The formulation of objectives
     - Objectives are the goals the researcher sets out to attain in the study.
     - Objectives should be listed under two headings:
       1. Main objective: an overall statement of the thrust of the study
       2. Sub-objectives: specific aspects of the topic, listed numerically
     - Characteristics of objectives: they should be (1) clear, (2) complete, (3) specific, (4) identify the main variables to be correlated, and (5) identify the direction of the relationship.
  9. Establishing operational definitions
     - The researcher needs to develop operational definitions for the major concepts used in the study, and a framework for the study population that enables the selection of appropriate respondents.
     - Operational definitions give an operational meaning to the study population and to the concepts used.
     - Example: the main objective is to measure the effectiveness of a retraining program designed to help young people. It is equally important to decide exactly what you mean by "young": up to what age will you consider a person young?
     - Operationalisation of the concepts and the study population:
       - Study concept: effectiveness. Issue: what constitutes "effectiveness"?
       - Population to be studied: the young. Issue: who would be considered a young person?
  10. Identifying variables
     - Definition of a variable: an image, perception or concept that is capable of measurement, and hence capable of taking on different values, is called a variable. In short, a variable is a concept that can be measured.
     - Examples of statements whose concepts must be turned into variables: (1) "This program is effective." (2) "We are providing a quality service to our clients." (3) "This product is doing well."
  11. The difference between a concept and a variable
     - Concepts are mental images or perceptions, and therefore their meanings vary markedly from individual to individual. They are subjective impressions with no uniformity of understanding among different people, and as such cannot be measured directly.
     - Variables are measurable, although with varying degrees of accuracy; the degree of precision varies from scale to scale and from variable to variable.
     - Examples of concepts: effectiveness, satisfaction, attitude, rich, excellent.
     - Examples of variables: gender (male/female), weight, height.
  12. Concepts, indicators and variables
     - If a concept is used in a study, the researcher needs to consider its operationalisation, that is, how it will be measured.
     - To operationalise a concept, first go through the process of identifying indicators: a set of criteria reflective of the concept, which can then be converted into variables.
  13. Types of variable
     - Independent variable: the cause supposed to be responsible for bringing about change in a phenomenon or situation.
     - Dependent variable: the outcome of the change brought about by introduction of an independent variable.
     - Extraneous variable: several other factors operating in a real-life situation may affect changes in the dependent variable. These factors, not measured in the study, may increase or decrease the magnitude or strength of the relationship between the independent and dependent variables.
     - Intervening variable: links the independent and dependent variables.
  14. Types of measurement scale
     There are four measurement categories:
     1. Nominal or classificatory scale: each subgroup has a characteristic/property common to all members classified within that subgroup. E.g. tree, house, taxi; gender (male/female).
     2. Ordinal or ranking scale: has all the characteristics of a nominal scale, plus an ordering among the subgroups. E.g. income (above average, average, low).
     3. Interval scale: has all the characteristics of an ordinal scale, plus a unit of measurement. E.g. temperature.
     4. Ratio scale: has all the properties of an interval scale, plus a true zero point. E.g. height, income.
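The phrase "has all the characteristics of" the previous scale describes a cumulative hierarchy. The following sketch, which is my own illustration and not part of the slides, encodes the four scales as data and checks that cumulative property; the lists of permissible statistics are the standard ones for each level.

```python
# Illustrative lookup table for the four measurement scales.
# "allows" = statistics that are meaningful at that level of measurement.
SCALES = {
    "nominal":  {"example": "gender (male/female)",
                 "allows": {"counts", "mode"}},
    "ordinal":  {"example": "income (above average/average/low)",
                 "allows": {"counts", "mode", "median", "rank order"}},
    "interval": {"example": "temperature in Celsius",
                 "allows": {"counts", "mode", "median", "rank order",
                            "mean", "differences"}},
    "ratio":    {"example": "height",
                 "allows": {"counts", "mode", "median", "rank order",
                            "mean", "differences", "ratios"}},
}

# Verify the cumulative property stated on the slide: each scale keeps
# every characteristic of the scale before it and adds something new.
order = ["nominal", "ordinal", "interval", "ratio"]
for lower, higher in zip(order, order[1:]):
    assert SCALES[lower]["allows"] < SCALES[higher]["allows"]
print("cumulative property holds")
```

For instance, the table records that ratios ("twice as tall") are meaningful for height but not for Celsius temperature, which has an arbitrary zero.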
  15. Constructing hypotheses
     - Definition of a hypothesis: the second important consideration in the formulation of a research problem is the construction of hypotheses. Hypotheses bring clarity, specificity and focus to a research problem, but are not essential for a study.
     - Hypotheses are based on educated guesses. The researcher is not required to know everything about the phenomenon, the situation, the prevalence of a condition in a population, or the outcome of a program, but must have a hunch on which to base certain assumptions or guesses.
     - The verification process, based on the information collected, will yield one of three outcomes: the hunch is (1) right, (2) partially right, or (3) wrong. Without this process of verification, the researcher cannot conclude anything about the validity of the assumption.
     - Hence a hypothesis is a hunch, assumption, suspicion, assertion or idea about a phenomenon, relationship or situation, the reality or truth of which is not yet known.
  16. The functions of a hypothesis
     - The formulation of hypotheses provides a study with focus: it tells you which specific aspects of a research problem to investigate.
     - A hypothesis tells you what data to collect and what not to collect, thereby providing focus to the study.
     - As it provides a focus, the construction of a hypothesis enhances objectivity in a study.
     - A hypothesis may enable you to add to the formulation of theory: it enables you to conclude specifically what is true and what is false.
  17. The characteristics of a hypothesis
     - A hypothesis should be simple, specific and conceptually clear. E.g. "The average age of the male students in this class is higher than that of the female students."
     - A hypothesis should be capable of verification: methods and techniques must be available for data collection and analysis.
     - A hypothesis should be related to the existing body of knowledge: it is important that a hypothesis emerges from the existing body of knowledge.
     - A hypothesis should be operationalisable: it can be expressed in terms that can be measured.
  18. Types of hypothesis
     Broadly there are two categories of hypothesis:
     1. Research hypotheses
     2. Alternative hypotheses
     - The formulation of an alternative hypothesis is a convention in scientific circles. Its main function is to specify explicitly the relationship that will be considered as true in case the research hypothesis proves to be wrong.
     - The alternative hypothesis is the opposite of the research hypothesis.
     - A null hypothesis, or hypothesis of no difference, is commonly formulated as an alternative hypothesis.
  19. Hypotheses can also be classified by what they stipulate:
     - Null hypothesis: stipulates that there is no difference between two situations, groups or outcomes, or in the prevalence of a condition or phenomenon.
     - Hypothesis of difference: the researcher stipulates that there will be a difference, but does not specify its magnitude.
     - Hypothesis of point-prevalence: the outcome is stated in exact quantitative units, e.g. "The proportion of male and female smokers is 60 and 30 per cent respectively."
     - Hypothesis of association: stipulates the relationship between the impact of different combinations of variables on the dependent variable, or the relationship between the prevalence of a phenomenon among different populations.
  20. Errors in testing a hypothesis
     Incorrect conclusions about the validity of a hypothesis may be drawn if:
     - the study design selected is faulty;
     - the sampling procedure adopted is faulty;
     - the method of data collection is inaccurate;
     - the analysis is wrong;
     - the statistical procedures applied are inappropriate; or
     - the conclusions drawn are incorrect.
     In addition, two kinds of statistical error can occur:
     - Rejection of a null hypothesis when it is true is known as a Type I error.
     - Acceptance of a null hypothesis when it is false is known as a Type II error.
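The Type I error rate can be made concrete by simulation: when the null hypothesis is actually true and we test at significance level 0.05, we should reject a true null (commit a Type I error) in roughly 5% of repeated studies. This stdlib-only sketch, my own illustration rather than anything from the slides, uses a simple two-sample z-test with a known standard deviation; the sample sizes and trial count are arbitrary choices.

```python
import math
import random

def two_sample_z_p(xs, ys, sigma=1.0):
    """Two-sided p-value for 'equal means', assuming normal data
    with a known common standard deviation sigma."""
    nx, ny = len(xs), len(ys)
    z = (sum(xs) / nx - sum(ys) / ny) / (sigma * math.sqrt(1 / nx + 1 / ny))
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > |z|) for standard normal

random.seed(0)
ALPHA, TRIALS, N = 0.05, 2000, 30
type1 = 0
for _ in range(TRIALS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]  # null is TRUE: same mean
    if two_sample_z_p(a, b) < ALPHA:
        type1 += 1  # rejected a true null -> Type I error

print(type1 / TRIALS)  # observed Type I error rate, close to ALPHA
```

A Type II error would be studied the same way, but drawing the two groups from distributions with genuinely different means and counting how often the test fails to reject.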
  21. The research design
     Definition: a research design is a procedural plan adopted by the researcher to answer the research questions validly, objectively, accurately and economically.
  22. Functions of a research design
     - Conceptualise an operational plan to undertake the various procedures and tasks required to complete the study.
     - Ensure that these procedures are adequate to obtain valid, objective and accurate answers to the research questions.
  23. Selecting a method of data collection
     There are two major approaches to gathering information about a situation, person, problem or phenomenon:
     1. Secondary data: information gathered using the first approach is said to be collected from secondary sources.
     2. Primary data: information collected first-hand from the sources used in the second approach, the primary sources.
  24. Methods of data collection
     - Secondary sources: documents
       - Government publications
       - Earlier research
       - Census data
       - Personal records
       - Client histories
       - Service records
     - Primary sources:
       - Observation: participant, non-participant
       - Interviewing: structured, unstructured
       - Questionnaire: mailed questionnaire, collective questionnaire
  25. Collecting data using attitudinal scales
     Functions of attitudinal scales:
     - Respondents usually have different attitudes towards different aspects of an issue.
     - If the researcher wants to find out the attitude of respondents towards an issue, either a closed-ended or an open-ended question can be asked.
     Difficulties in developing an attitudinal scale:
     1. Which aspects of a situation or issue should be included when seeking to measure an attitude? E.g. for "What is your attitude towards your lecturer?", which aspects of teaching should be included in a scale to find out the attitude of students towards their lecturer?
     2. What procedure should be adopted for combining the different aspects to obtain an overall picture?
     3. How can one ensure that a scale really is measuring what it is supposed to measure?
  26. Types of attitudinal scale
     There are three major types of attitudinal scale:
     1. The summated rating scale, also known as the Likert scale
     2. The equal-appearing interval scale, or differential scale, also known as the Thurstone scale
     3. The cumulative scale, also known as the Guttman scale
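The summated rating (Likert) scale gets its name from how it is scored: each item response is coded on a fixed range and the codes are summed into one attitude score. The sketch below is my own minimal illustration of that scoring step; the item responses, the 5-point coding, and the reverse-keyed item are all made-up examples.

```python
# Coding for a 5-point Likert response format (illustrative).
RESPONSE_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def likert_score(responses, reverse_items=()):
    """Sum the item codes into one attitude score.
    Items listed in reverse_items (by index) are negatively worded,
    so their codes are flipped: 6 - code on a 5-point scale."""
    total = 0
    for i, answer in enumerate(responses):
        code = RESPONSE_CODES[answer]
        if i in reverse_items:
            code = 6 - code  # reverse-scored item
        total += code
    return total

# One respondent's answers to a four-item scale, with item 3 reverse-keyed:
answers = ["agree", "strongly agree", "disagree", "neutral"]
print(likert_score(answers, reverse_items={2}))  # 4 + 5 + (6-2) + 3 = 16
```

The Thurstone scale differs in that each statement carries a judge-assigned scale value rather than a uniform 1-to-5 code, and the Guttman scale orders items so that agreement with a harder item implies agreement with the easier ones.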
  27. Establishing the validity and reliability of a research instrument
     There are two perspectives on validity:
     1. Is the research investigation providing answers to the research questions for which it was undertaken?
     2. If so, is it providing these answers using appropriate methods and procedures?
     The concept of validity: it is important to remember that the concept of validity is pertinent only to a particular instrument, and that it is an ideal state which you as a researcher aim to achieve.
  28. Types of validity
     There are three types of validity:
     1. Face and content validity: each question or item on the scale must have a logical link with an objective. Establishment of this link is called face validity.
     2. Concurrent and predictive validity: predictive validity is judged by the degree to which an instrument can forecast an outcome; concurrent validity is judged by how well an instrument compares with a second assessment made concurrently.
     3. Construct validity: a more sophisticated technique for establishing the validity of an instrument. It is based upon statistical procedures and is determined by ascertaining the contribution of each construct to the total variance observed in a phenomenon.
  29. The concept of reliability
     Reliability is the degree of accuracy or precision in the measurements made by a research instrument. The concept of reliability in relation to research can be looked at from two sides:
     1. How reliable is an instrument?
     2. How unreliable is it?
     The first question focuses on the ability of an instrument to produce consistent measurements; the second focuses on the degree of inconsistency in the measurements made by an instrument.
  30. Factors affecting the reliability of a research instrument
     - The wording of questions
     - The physical setting
     - The respondent's mood
     - The nature of the interaction
     - The regression effect of an instrument
     Methods of determining the reliability of an instrument:
     1. External consistency procedures
     2. Internal consistency procedures
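One widely used internal consistency procedure is Cronbach's alpha, which compares the variance of the individual items with the variance of the total score: when items measure the same underlying attitude, their scores rise and fall together and alpha approaches 1. The slide names only the procedure category, so taking alpha as the worked example is my choice, and the response matrix below (rows are respondents, columns are scale items) is made-up data.

```python
def variance(values):
    """Sample variance (n - 1 denominator)."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(data):
    """data: list of respondents, each a list of item scores."""
    k = len(data[0])                                  # number of items
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])  # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents answering a four-item 5-point scale (illustrative data):
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(responses), 2))  # -> 0.94, high internal consistency
```

External consistency procedures, by contrast, compare measurements taken on separate occasions or with parallel forms of the instrument, so they need repeated administrations rather than a single response matrix.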
  31. Thank You