This document discusses key concepts related to variables in scientific research including:
- Independent, dependent, intervening/mediating, and moderating variables. It provides examples of how these variables relate in studies of how stress impacts employee mental state and how promotions impact employee motivation.
- It also discusses controlled variables, extraneous variables, hypotheses, conceptual frameworks, and the overall research process.
- The roles of the hypothesis in guiding research and what constitutes a good hypothesis are explained. Descriptive and relational hypotheses are defined.
- The four levels of scale measurement - nominal, ordinal, interval, and ratio - are outlined and examples are provided. Data analysis approaches for each level are also summarized.
1. Variable
• A phenomenon which is subject to change
• Variables represent the measurable traits that can
change over the course of a scientific experiment
• Independent variable
• Dependent variable
• Intervening/ Mediating variable
• Moderating variable
• Extraneous variable
• Controlled variables
2. Independent Variable (IV)
• Variable that is presumed to affect the other variable
• It is a presumed cause.
How stress impacts the mental state of employees
IV = ?
Promotion affects Employee Motivation
IV = ?
3. Dependent Variable (DV)
• Variable affected by the independent variable.
• It responds to the independent variable
• IV is a presumed cause, whereas DV is a presumed effect.
How stress impacts the mental state of employees
IV = Stress
DV=?
You directly manipulate the stress level in human subjects (employees) and measure how
the stress level changes the mental state.
Promotion affects Employee Motivation
IV = Promotion
DV =
Let's explore other research papers…
5. Intervening/mediating Variable
• In communication research, a mediating
variable is a variable that links the
independent and the dependent variables,
and whose existence explains the relationship
between the other two variables. A mediating
variable is also known as a mediator variable
or an intervening variable.
• An intervening variable is a hypothetical variable
used to explain causal links between other
variables. Intervening variables cannot be observed
in an experiment (that’s why they are
hypothetical).
• It is caused by IV, and it itself is a cause of the DV
• Higher education leads to higher income?
• IV=?, DV=? Mediator=?
• (occupation)?
6. Moderating Variable
• A moderator variable, commonly denoted as just M, is a third
variable that affects the strength of the relationship between
a dependent and independent variable
• In correlation, a moderator is a third variable that affects the
correlation of two variables.
7. Controlled Variable
• It is a variable that is not allowed to be changed unpredictably during
an experiment
• As they are expected to remain the same, they are also called constant
variables.
8. Extraneous variable
• Extraneous variables are undesirable variables that influence the
relationships between variables under study
9. Hypothesis
• A tentative theory that has not yet been tested.
• Hypotheses are propositions which are empirically
testable. They are usually concerned with the
relationships between variables
• Example: Increasing salary by 10% will double the
production
• If basic needs are not met, then motivation level
among the employees will be low
10. The Role of the Hypothesis
• Guides the direction of the study
• Identifies facts that are relevant
• Suggests which form of research design is
appropriate
• Provides a framework for organizing the
conclusions that result
11. What is a Good Hypothesis?
• A good hypothesis should fulfill three
conditions:
• Must be adequate for its purpose
• Must be testable
• Must be better than its rivals
12. Types of Hypotheses
• Descriptive Hypotheses:
• These describe properties
• Example:
• Current turnover in UAE telecom
industry is greater than 15 per cent
per annum
13. Types of Hypotheses
• Relational Hypotheses:
• These describe the relationship between
two variables
• Example:
• CEOs with higher education spend
more on training and development
of their employees
• The greater the employee welfare
measures provided by the
management of a company, the
smaller the labour turnover of
skilled workers
14. Conceptual framework
• A conceptual framework is used in research to
outline possible courses of action or to present a
preferred approach to an idea or thought.
• Conceptual frameworks act like maps that give
coherence to empirical inquiry.
16. Defining a Researchable Problem
• A research problem is the basis for academic research. Without a well-defined research
problem, you are likely to end up with an unfocused and
unmanageable project.
• You might end up repeating what other people have already said,
trying to say too much, or doing research without a clear purpose and
justification. You need a problem in order to do research that
contributes new and relevant insights.
• Whether you’re planning your thesis, starting a research paper or
writing a research proposal, the research problem is the first step
towards knowing exactly what you’ll do and why.
17. THE PROBLEM STATEMENT
You will lead into the problem on the basis of your discussion of the literature. Think in terms of these
questions:
What research gap is your work intended to fill?
What limitations in previous work does it address?
What contribution to knowledge does it make?
Example: In the literature on the gig economy, these new forms of employment
are sometimes characterized as a flexible active choice and sometimes as an
exploitative last resort. To gain a fuller understanding of why young people
engage in the gig economy, in-depth qualitative research is required. Focusing on
workers’ experiences can help develop more robust theories of flexibility and
precarity in contemporary employment, as well as potentially informing future
policy objectives.
18. THE PROBLEM STATEMENT (Cont’d)
•Finally, the problem statement should frame how you intend to
address the problem. Your goal should not be to find a conclusive
solution, but to seek out the reasons behind the problem and propose
more effective approaches to tackling or understanding it.
•The aim is the overall purpose of your research. It is generally written
in the infinitive form:
The aim of this study is to determine…
This project aims to explore…
I aim to investigate…
19. 4. DEVELOPING RESEARCH QUESTION/HYPOTHESIS
•You must have an explicit research question or hypothesis.
•This means stating a question that the research would answer. It must be a
question that can be answered by empirical data.
• If you provide a hypothesis, it must be possible to demonstrate whether the
hypothesis is false by reference to empirical evidence.
• Present your research question clearly and directly, with a minimum of
discussion at this point. The rest of the paper will be taken up with discussing
and investigating this question and hypothesis; here you just need to express it.
•WHAT MAKES A GOOD RESEARCH QUESTION?
Focused
Empirical
Clear
Based on prior research or theory
Important to answer
Does not use “should”
Has intuitive appeal
20. Levels of Scale Measurement
1. Nominal
• Most elementary level of measurement.
• Assigns a value to an object for identification or classification purposes.
• Nominal scale is a naming scale, where variables are simply “named” or labeled, with
no specific order.
• Calculations done on these numbers will be basic as they have no quantitative
significance.
• Nominal scale is often used in research surveys and questionnaires where only
variable labels hold significance.
Which mobile phone do you use?
• Apple
• Samsung
• other
21. Levels of Scale Measurement Cont’d
Nominal Scale Data and Analysis
• There are two primary ways in which nominal scale data can be collected:
a. By asking an open-ended question, the answers to which can be coded to a respective
number or label decided by the researcher.
b. The other alternative to collect nominal data is to include a multiple-choice question in
which the answers will be labeled.
• Mostly, the analysis of gathered data will happen using percentages or the mode, i.e., the
most common answer received for the question.
• The data collected later can also be compared within different groups.
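The mobile-phone question above can be analyzed as sketched below (hypothetical responses), using only the mode and percentages, since nominal labels carry no quantitative meaning:

```python
from collections import Counter

# Hypothetical answers to "Which mobile phone do you use?"
responses = ["Apple", "Samsung", "Apple", "Other", "Samsung", "Apple"]

counts = Counter(responses)
mode_label = counts.most_common(1)[0][0]  # most frequent label
percentages = {label: round(100 * n / len(responses), 1)
               for label, n in counts.items()}

print(mode_label)    # the most common answer
print(percentages)   # share of each label
```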
22. Levels of Scale Measurement Cont’d
2. Ordinal
• Have nominal properties as well as the following.
• Ordinal scale is the 2nd level of measurement that reports the ranking and ordering of the data
without actually establishing the degree of variation between them.
• “Ordinal” indicates “order”. Ordinal data have naturally occurring
orders, but the differences between values are unknown. They can be named, grouped and also ranked.
• Survey respondents will choose between options of satisfaction (e.g., very unsatisfied to very
satisfied), but the answer to “how much?” will remain unanswered.
• Here, the order of variables is of prime importance and so is the
labeling. Very unsatisfied will always be worse than unsatisfied and
satisfied will be worse than very satisfied.
• This is where ordinal scale is a step above nominal scale – the order
is relevant to the results and so is their naming.
• Analyzing results based on the order along with the name becomes
a convenient process for the researcher.
• If they intend to obtain more information than what they would
collect using a nominal scale, they can use the ordinal scale.
23. Levels of Scale Measurement Cont’d
Ordinal Data and Analysis
• Ordinal scale data can be presented in tabular or graphical formats for a
researcher to conduct a convenient analysis of collected data.
• Also, some statistical tests can also be used to analyze ordinal data.
• These statistical tests are generally implemented to compare two or more
ordinal groups to conclude which group is bigger or smaller than
another. Researchers can also analyze whether two or more ordinal groups
have the same median or not.
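A minimal sketch of comparing the medians of two ordinal groups (invented satisfaction codes, 1 = very unsatisfied … 5 = very satisfied):

```python
from statistics import median

# Invented satisfaction ratings coded 1..5 for two groups;
# the codes carry order, but the gaps between them are unknown
group_a = [2, 3, 3, 4, 5]
group_b = [1, 2, 2, 3, 3]

# For ordinal data, compare medians rather than means, since
# means assume equal spacing between categories
print(median(group_a), median(group_b))
```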
24. Levels of Scale Measurement Cont’d
3. Interval
• Have both nominal and ordinal properties as well as the following.
• Interval Scale is defined as a numerical scale where the order of the variables is known
as well as the difference between these variables.
• Measurement: Interval data is measured using an interval scale, which not only shows the order and
direction but also shows the exact difference in the value. For example, the markings on a
thermometer or a ruler are equidistant; in simpler words, adjacent markings are the same
distance apart.
• Interval Difference: The distances between values in interval data are equal. For example, the
difference between 10 cm and 20 cm is the same as between 20 cm and 30 cm.
• Calculation: With interval data, one can add or subtract values but cannot meaningfully divide or multiply them. Almost all
statistical analyses are applicable to interval data: mean, mode, median, etc.
• Point Zero: The zero point is arbitrary, which means a variable can take a
negative value; temperature can be -10 degrees, but height cannot be below zero.
25. Levels of Scale Measurement Cont’d
Examples:
• Use Likert items, MCQs, or any other form to collect the following type of data, which has an arbitrary ZERO
value
• One can measure time during the day using a 12-hour clock; this is a good example of interval
data. Time in a 12-hour format is a rotational measure that keeps restarting from zero at a set
periodicity. These numbers are on an interval scale as the distance between them is measurable
and comparable. For example, the difference between 5 minutes and 10 minutes is the same as
15 minutes and 20 minutes in a 12-hour clock.
• Temperature measured in Fahrenheit or Celsius, but not in Kelvin. If you measure
temperature in Fahrenheit or Celsius, it will be considered interval data, as 0 is arbitrary.
But in Kelvin, 0 is absolute: there can’t be a temperature below zero in Kelvin.
• When you calculate intelligence score in an IQ test. There is no zero point for IQ. According to
psychological studies, a person cannot have zero intelligence, therefore in this example, zero is
arbitrary. IQ is numeric data expressed in intervals using a fixed measurement scale.
• Test scores of examinations like the SAT. Scores on the SAT are in the range of 200-800. The numbers
from 0 to 200 are not used when they scale the raw score (number of questions answered
correctly) to the section score. The reference point is not an absolute zero, thus, it qualifies to
become interval data.
• Age is also a variable that can be measured on an interval scale. For example, if A is 15 years old
and B is 20 years old, it is not only clear that B is older than A, but also that B is older by 5 years.
• Interval data is one of the most used data types. Survey tools offer several ways to capture
interval data. When a survey is deployed to a respondent, with a certain demographic
question that asks respondents to state their income, these figures can range from zero to
infinity!
26. Levels of Scale Measurement Cont’d
Interval Data and Analysis
• All the techniques applicable to nominal and ordinal data analysis are applicable
to interval data as well. Apart from those techniques, there are a few analysis
methods, such as descriptive statistics and correlation and regression analysis, which are used
extensively for analyzing interval data.
• Descriptive statistics is the term given to the analysis of numerical data which
helps to describe, depict, or summarize data in a meaningful manner and it helps
in calculation of mean, median, and mode.
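A small sketch of these points with invented Celsius readings: differences are meaningful and descriptive statistics apply, even though zero is arbitrary:

```python
from statistics import mean, median, mode

# Invented temperatures in Celsius (interval: equal spacing,
# arbitrary zero, negative values allowed)
temps = [-5, 0, 5, 10, 10, 20]

# Differences are meaningful: 20 - 10 equals 10 - 0
assert temps[5] - temps[3] == temps[3] - temps[1]

# Descriptive statistics are valid on interval data
print(round(mean(temps), 2), median(temps), mode(temps))
```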
27. Levels of Scale Measurement Cont’d
4. Ratio
• Highest form of measurement.
• Have all the properties of interval scales with the additional
attribute of representing absolute quantities.
• Ratio Scale is defined as a variable measurement scale that not
only produces the order of variables but also makes the
difference between variables known along with information on
the value of true zero. (being treated as a point of origin).
• Because of the existence of true zero value, the ratio scale
doesn’t have negative values.
• The best examples of ratio scales are weight and height. In
market research, a ratio scale is used to calculate market share,
annual sales, the price of an upcoming product, the number of
consumers, etc.
• Can use Likert items, MCQs or any other form to collect the
following type of data having an absolute ZERO value
28. Levels of Scale Measurement Cont’d
Ratio Data and Analysis
• With the option of true zero, in addition to all the analysis of interval data, varied inferential, and
descriptive analysis techniques can be applied to the variables.
• For example, we can calculate ratios on income data because $0 is the absolute minimum amount of money a person
could have with them.
• Ratio data can be multiplied and divided, and this is one of the significant differences between ratio data
and interval data, which can only be added and subtracted. In ratio data, the difference between 1 and 2 is
the same as the difference between 3 and 4, but also here 4 is twice as much as 2. This comparison is
impossible in interval data.
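A one-line sketch of the difference, using invented weights (ratio data): multiplication and division are meaningful because zero really means “no weight”:

```python
# Invented weights in kg (ratio scale: true zero, no negatives)
weights = [40, 60, 80]

# 80 kg really is twice 40 kg; the same claim would be
# meaningless for interval data such as Celsius temperature
ratio = weights[2] / weights[0]
print(ratio)
```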
30. The Three Criteria for Good Measurement
Sensitivity, Reliability, and Validity together make up Good Measurement.
31. 1. Reliability
• Reliability
• The degree to which the measure of a construct is consistent or dependable.
• In other words, if we use this scale to measure the same construct multiple times, do
we get pretty much the same result every time, assuming the underlying
phenomenon is not changing?
• You measure the temperature of a liquid sample several times under identical conditions. The
thermometer displays the same temperature every time, so the results are reliable.
• A doctor uses a symptom questionnaire to diagnose a patient with a long-term medical condition.
Several different doctors use the same questionnaire with the same patient but give different
diagnoses. This indicates that the questionnaire has low reliability as a measure of the condition.
• Note that reliability implies consistency but not accuracy.
32. Estimating Reliability
TYPE OF RELIABILITY WHAT DOES IT ASSESS? EXAMPLE
Test-retest The consistency of a measure across
time: do you get the same results
when you repeat the measurement?
A group of participants complete a questionnaire designed
to measure personality traits. If they repeat the
questionnaire days, weeks or months apart and give the
same answers, this indicates high test-retest reliability.
Interrater The consistency of a measure across
raters or observers: do you get the
same results when different people
conduct the same measurement?
Based on an assessment criteria checklist, five examiners
submit substantially different results for the same student
project. This indicates that the assessment checklist has low
inter-rater reliability (for example, because the criteria are
too subjective).
Internal consistency The consistency of the measurement
itself: do you get the same results
from different parts of a test that are
designed to measure the same thing?
You design a questionnaire to measure self-esteem. If you
randomly split the results into two halves, there should be
a strong correlation between the two sets of results. If the
two results are very different, this indicates low internal
consistency.
33. 2. Validity
• Validity refers to how accurately a method measures what it is intended to measure. If research
has high validity, that means it produces results that correspond to real properties,
characteristics, and variations in the physical or social world.
• It checks the accuracy of a measure or the extent to which a score truthfully represents a
concept.
• Does a scale measure what was intended to be measured?
• Example: The thermometer that you used to test the sample gives reliable results. However, the
thermometer has not been calibrated properly, so the result is 2 degrees lower than the true value.
Therefore, the measurement is not valid.
• High reliability is one indicator that a measurement is valid. If a method is not reliable, it
probably isn’t valid.
• Establishing Validity:
• Is there a consensus that the scale measures what it is supposed to measure?
• Does the measure correlate with other measures of the same concept?
• Does the behavior expected from the measure predict actual observed behavior?
34. Estimating Validity
Type of validity What does it assess? Example
Construct The adherence of a measure to existing theory
and knowledge of the concept being measured.
A self-esteem questionnaire could be assessed by measuring
other traits known or assumed to be related to the concept of
self-esteem (such as social skills and optimism). Strong
correlation between the scores for self-esteem and associated
traits would indicate high construct validity.
Content The extent to which the measurement covers all
aspects of the concept being measured.
A test that aims to measure a class of students’ level of Spanish
contains reading, writing and speaking components, but no
listening component. Experts agree that listening comprehension
is an essential aspect of language ability, so the test lacks content
validity for measuring the overall level of ability in Spanish.
Criterion The extent to which the result of a measure
corresponds to other valid measures of the
same concept.
A survey is conducted to measure the political opinions of voters
in a region. If the results accurately predict the later outcome of
an election in that region, this indicates that the survey has high
criterion validity.
35. Estimating Validity (cont’d)
• Face Validity
• A scale’s content logically appears to reflect what was intended to be measured. Face
validity refers to whether an indicator seems to be a reasonable measure of its
underlying construct “on its face”.
• For instance, the frequency of one’s attendance at religious services seems to make
sense as an indication of a person’s religiosity without a lot of explanation. Hence
this indicator has face validity.
36. What is a Literature Review?
• According to Creswell (2005), a review of the literature “is a
written summary of journal articles, books and other
documents that describes the past and current state of
information, organizes the literature into topics and
documents a need for a proposed study.” (p. 79)
Creswell, J.W. (2005) Educational Research: Planning, Conducting, and Evaluating
Quantitative and Qualitative Research
37. What is a Literature Review? Cont’d
• A literature review
• surveys scholarly articles, books and other sources
• (e.g. dissertations, conference proceedings) relevant to a particular issue, area of
research, or theory.
• provides a short description and critical evaluation of work critical to the topic.
• offers an overview of significant literature published on a topic.
• (Lyons, 2005)
38. What is a Literature Review? Cont’d
• A well-written analytical narrative that brings a reader up-to-
date on what is known on a given topic, but also provides fresh
insights that advance knowledge
• Resolve conflicts between studies
• Identify new ways to interpret research results
• Create a path for future research
39. What is not an LR
The LR is a summary of research, BUT
it is not a mere “list” of found research
The LR should be organized:
• A coherent and articulate account of past and
current research findings
• The reviewer is a guide and should be able to
provide readers with an in-depth and current
status of research in a given area.
Suggestion: read 2 or 3 LRs in order to
become familiar with summary styles
40. What should LR do?
• The LR should document the need for a proposed study:
• Studies should not duplicate research that has been already
done.
• Even in cases when research is duplicated (replicated is the
appropriate term), one is responsible for documenting the
need for replication, e.g., need to explore the same
methodology with a different group or population, or need to
change methodology with the same group.
41. What is Preliminary Literature Review?
• This succinct review of current literature should:
• Provide further contextual background
• Reveal issues related to your study
• Describe similar problems in other organizations
• Provide significance to your approach to the study
42. Reasons for Conducting Literature Reviews
• For a review paper
• For the introduction (and discussion) of a research paper,
masters thesis or dissertation
• To embark on a new area of research
• For a research proposal
• (Burge, 2005)
43. Conducting a literature review will help you:
• Determine if proposed research is actually needed.
• Even if similar research has been published, researchers might
suggest a need for similar studies or replication.
• Narrow down a problem.
• It can be overwhelming getting into the literature of a field of study.
A literature review can help you understand where you need to focus
your efforts.
• Generate hypotheses or questions for further studies.
(Mauch & Birch, 2003)
44. Conducting a literature review will give you:
• Background knowledge of the field of inquiry
• Facts
• Eminent scholars
• Parameters of the field
• The most important ideas, theories, questions and hypotheses.
• Knowledge of the methodologies common to the field, and a feeling
for their usefulness and appropriateness in various settings.
(Mauch & Birch, 2003)
45. 10 Steps to a comprehensive Literature
review
1. Do I have clearly defined research aims prior to commencing the review?
2. Have I correctly identified all the sources that will help me define my problem statement or research
question?
3. Have I considered all kinds of literature – including both qualitative and quantitative research articles?
4. Do I have enough empirical or theoretical evidence to support my hypothesis?
5. Have I identified all the major inconsistencies or other shortcomings related to my research topic?
6. Have I gathered sufficient evidence from the literature about the accuracy and validity of the designs or
methods that I plan to use in my experiments?
7. Have I identified the purpose for which articles have been shortlisted for literature review?
8. Is my relationship diagram ready?
9. Have I recorded all the bibliographic information regarding my information sources?
10. Will my literature review reflect a report that is created after a thorough critical analysis of the literature?
Preliminary LR
46. Organising your material: Identifying a debate
Scholar X … Scholar Y, where the relationship may be:
• disagrees with
• agrees with (school
of thought?)
• builds on the conclusions of
• confirms the findings of
• has reservations about
47. Thinking critically
• When identifying the key ideas, themes and methodologies in
your field, it is important to think critically about them
• This will allow you to identify a ‘gap’ in the literature
Ask yourself:
• What are the strengths and weaknesses of these debates?
• What evidence is lacking, inconclusive or limited?
• What will you add to the topic? What will you do differently?
48. Creswell’s 5 stages to Conduct a Literature Review
• Stage 1: Identify Key Terms or “Descriptors”
• Extract key words from your title (remember, you may
decide to change the title later)
• Use some of the words other authors reported in the
literature
49. Creswell’s 5 stages to Conduct a Literature Review
• Stage 2: Locate Literature
• Use academic libraries, do not limit your search to an electronic search of
articles
• Use all sources: primary and secondary sources. A “primary source” is
research reported by the researcher who conducted the study. A “secondary
source” is research that summarizes or reports findings that come from
primary sources
• It is “best to report mostly primary sources” (p. 82)
• Search different types of literature: summaries, encyclopedias, dictionaries and
glossaries of terms, handbooks, statistical indexes, reviews and syntheses,
books, journals, indexed publications, electronic sources, abstract series, and
databases
• Sage, Emerald
50. Creswell’s 5 stages to Conduct a Literature Review
• Stage 2: Critically Evaluate and Select Literature Cont’d
• Classification # 1
• Everyday knowledge: Newspapers, weekly magazines, …
• Professional Knowledge
• Scientific Knowledge
• Classification # 2
• Primary publications: Scientific journals, books, theses/dissertations
and internal reports
• Secondary publications: handbooks, bibliographies, review articles
• Tertiary Publications: Summaries of handbooks, bibliographies and reviews
51. Creswell’s 5 stages to Conduct a Literature Review
• Stage 2: Critically Evaluate and Select Literature Cont’d
• How many articles?
• The LR should be exhaustive and as current as possible.
• There is no set number. As long as the search is exhaustive and focused
on the research topic, the review will be acceptable.
• How far back should one search?
• A reasonable and widely accepted timeframe includes research conducted during
the past 10 years.
• Important studies (i.e., studies that had a significant impact on the field of study)
should also be mentioned even if these go beyond the mentioned timeframe.
52. Creswell’s 5 stages to Conduct a Literature Review
• Stage 3: Critically Evaluate and Select Literature
• Rely on journal articles published in national journals
• Prioritize your search: first look for refereed journal articles, then,
non-refereed articles, then books, then conference papers,
dissertations and theses and then papers posted to websites
• Look for research articles and avoid as much as possible “opinion”
pieces
• Blend qualitative and quantitative research in your review
53. Creswell’s 5 stages to Conduct a Literature Review
• Stage 4: Organize the Literature
• Create a “file” or “abstract” system to keep track of what you read. Each article you read
should be summarized in one page containing
• Title (use APA to type the title so that you can later copy-paste this into the References
section of your paper)
• Source: journal article, book, glossary, etc.
• Research problem: one or two lines will suffice
• Research Questions or Hypotheses
• Data collection procedure (a description of sample characteristics can be very handy
as well)
• Results or findings of the study
• Sort these abstracts into groups of related topics or areas which can then become the
different sections of your review
54. Creswell’s 5 stages to Conduct a Literature Review
• Stage 5: Write a Literature Review Cont’d
Read the Material Closely (Carroll, 2006)
• read the abstract
• Decide whether to read the article in detail
• read introduction
• It explains why the study is important
• It provides review and evaluation of relevant literature
• read Method with a close, critical eye
• Focus on participants, measures, procedures
• Evaluate results
• Do the conclusions seem logical?
• Can you detect any bias on the part of the researcher?
• Take the discussion with a grain of salt (be suspicious)
• Edges are smoothed out
• Pay attention to limitations
55. Creswell’s 5 stages to Conduct a Literature Review
• Stage 5: Write a Literature Review Cont’d
• Summarize individual studies or articles
• Use as much or as little detail as each merits according to its
comparative importance in the literature
• Length denotes significance.
• You don’t need to provide a lot of detail about the procedures
used in other studies.
• Most literature reviews only describe the main findings,
relevant methodological issues, and/or major conclusions of
other research.
56. Creswell’s 5 stages to Conduct a Literature Review
• Stage 5: Write a Literature Review Cont’d
• How will you organise the information?
• Chronologically?
• Thematically?
• By trends/approaches/techniques?
• Major debates/controversies?
• Probably a combination of these
57. Creswell’s 5 stages to Conduct a Literature Review
• Stage 5: Write a Literature Review Cont’d
• Paragraphs and flow
• Paragraph:
• Topic sentence
• Discussion of topic
• Closing sentence
• Thematic and grammatical links
• Logical progression from one paragraph to the next
• Demonstrate links in your language