# Kinds Of Variables Kato Begum

1. Kinds of variables
    - **The independent variable** is the factor that is measured, manipulated, or selected by the experimenter to determine its relationship to an observed phenomenon. It is a stimulus or input variable that operates within a person or within his environment to affect behavior. An independent variable may be called a factor, and its variations are called levels.
    - **The dependent variable** is a response variable or output: the factor that is observed and measured to determine the effect of the independent variable. It is the factor that appears, disappears, or varies as the researcher introduces, removes, or varies the independent variable.
2. Kinds of variables (continued)
    - **Moderator variable:** the factor that is measured, manipulated, or selected by the experimenter to discover whether it modifies the relationship of the independent variable to an observed phenomenon. The term describes a special type of independent variable: a secondary independent variable selected to determine whether it affects the relationship between the study's primary independent variable and its dependent variable.
    - **Control variable:** a factor controlled by the experimenter to cancel out or neutralize any effect it might otherwise have on the observed phenomenon. A single study cannot examine all of the variables in a situation (situational variables) or in a person (dispositional variables); some must be neutralized to guarantee that they will not exert differential or moderating effects on the relationship between the independent and dependent variables.
    - **Intervening variable:** a factor that theoretically affects the observed phenomenon but cannot be seen, measured, or manipulated; its effect must be inferred from the effects of the independent and moderator variables on the observed phenomenon.
3. Consider the hypothesis
    - "Among students of the same age and intelligence, skill performance is directly related to the number of practice trials, the relationship being particularly strong among boys, but also holding, though less directly, among girls." This hypothesis, which indicates that practice increases learning, involves several variables:
    - Independent variable: number of practice trials
    - Dependent variable: skill performance
    - Control variables: age, intelligence
    - Moderator variable: gender
    - Intervening variable: learning
4. Steps in data processing
5. Quantitative analysis strategies
    - There are two types of quantitative analysis:
    - **Descriptive:** uses numerical and graphical methods to find patterns in a data set, summarize the information, and present it in a convenient form.
    - **Inferential:** uses a sample to make estimates, decisions, or predictions about a population. It consists of estimation techniques and hypothesis testing.
6. Variables, data, and types of data
    - **Variable:** a characteristic or property of an individual population unit; the value of the characteristic may vary among units in a population. Kinds of variables: independent, dependent, moderator, control, and intervening.
    - **Data:** the values of the observations recorded for one or more variables.
    - **Types of data:** quantitative or qualitative.
    - **Quantitative data (measurements):** data measured on a naturally occurring numerical scale.
    - **Qualitative data (categorical):** data that cannot be measured on a naturally occurring numerical scale and can only be classified into a group of categories (classes).
7. Examples of qualitative and quantitative data
    - Examples of quantitative data: temperature, height, weight, age, student score, total students in the school, etc.
    - Examples of qualitative data: sex, grades (A, B, C, D, or E), competency in English (full, moderate, little, not at all), etc.
    - Identify the following as qualitative or quantitative: type of institution (public or private), system of education (Pakistani, British, or American), medium of instruction (English, Urdu, other), importance of communication skills (not at all, a little, quite, very, most), number of teachers in the institution, years of schooling.
8. More on qualitative and quantitative data
    - Qualitative data may be:
        - **Nominal data:** categories cannot be ranked.
        - **Ordinal data:** categories can be ranked or meaningfully ordered.
    - Quantitative data may be:
        - **Interval data:** differences between values have meaning, but ratios between values have none; zero is arbitrary; can add/subtract but cannot multiply/divide.
        - **Ratio data:** ratios between values have meaning; zero is the absence of the characteristic being measured; can add/subtract/multiply/divide.
9. Summary of data classification (increasing complexity)
10. Presentation of data
    - Statistical data are generally presented by:
    - Tables: frequency tables and cross tabulations
    - Graphs: for qualitative data and for quantitative data
11. Frequency tables and cross tabulation
    - **What is a frequency table?** A tabular summary of a set of data showing the frequency (number) of items in each of several non-overlapping groups or classes, with each data value belonging to one and only one group or class.
    - Note: for qualitative data, a class is one of the categories of the variable; for quantitative data, it is a range of values established to divide the data into categories.
    - **What is cross tabulation?** A tabular summary of a set of data when two or more variables are observed at the same time.
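As a minimal sketch, a frequency table for qualitative data can be built with Python's standard library; the response values below are hypothetical:

```python
from collections import Counter

# Hypothetical qualitative data: medium of instruction reported by students
responses = ["English", "Urdu", "English", "Other", "English", "Urdu"]

# Frequency table: number of items in each non-overlapping class
freq_table = Counter(responses)

for category, frequency in freq_table.most_common():
    print(category, frequency)
```

Because each value belongs to exactly one category, the frequencies sum to the total number of observations.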
12. Central tendency and its types
    - Value(s) that define the tendency of the data to cluster around, or center about, certain numerical values.
    - The main measures of central tendency are the mean (arithmetic mean), median, and mode.
13. Mean (arithmetic mean)
    - The sum of the values of all the observations in a data set divided by the total number of observations. Mathematically:
    - Sample mean: x̄ = (Σxᵢ) / n
    - Population mean: μ = (Σxᵢ) / N
14. Median
    - The middle point of the set of data, i.e. exactly half of the data points lie above the median and exactly half lie below.
    - If the number of observations n is odd, it is the middle point of the ordered data: the ((n + 1) / 2)-th observation.
    - If n is even, it is the average (mean) of the two middle points of the ordered data: the (n / 2)-th and (n / 2 + 1)-th observations.
15. Mode
    - The measurement(s) that occurs with the greatest frequency in the sample, i.e. the most common point(s).
    - A unimodal data set contains only one mode; a bimodal data set contains two modes; and so on.
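The three measures of central tendency can be computed directly with Python's `statistics` module; the score list here is hypothetical:

```python
import statistics

# Hypothetical sample of student scores
scores = [60, 70, 70, 80, 95]

mean = statistics.mean(scores)      # sum of values / number of observations
median = statistics.median(scores)  # middle point of the ordered data
mode = statistics.mode(scores)      # most frequent value (unimodal here)

print(mean, median, mode)  # 75 70 70
```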
16. Judging data symmetry using the mean and median
    - If the median is less than the mean, the data set is skewed right (extreme data in the right tail increase the mean).
    - If the median is greater than the mean, the data set is skewed left (extreme data in the left tail decrease the mean).
    - If the median equals the mean, the data set is said to be symmetrical.
17. Measures of data variability
    - Knowing the central tendencies (mean, median, mode) is not enough; we also need a method for determining how closely the data cluster around the center point(s).
    - The most common measures of data variability are the range, variance, and standard deviation.
18. Range
    - The simplest measure of variability, calculated by subtracting the smallest measurement from the largest.
    - It is not a good measure of variability: two data sets with the same range can have very different spreads.
19. Variance
    - For a sample, it is the sum of the squared deviations from the mean divided by (n − 1), and is denoted s²: s² = Σ(xᵢ − x̄)² / (n − 1)
    - For a population, it is the sum of the squared deviations from the mean divided by N, and is denoted σ²: σ² = Σ(xᵢ − μ)² / N
    - Note: deviations are squared to remove the effects of negative differences.
20. Standard deviation
    - Variance is not a useful metric on its own (its units are "units squared"); taking the positive square root of the variance gives a metric in the same units as the data itself.
    - Sample standard deviation: s
    - Population standard deviation: σ
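A minimal sketch of both the sample and population versions, again using the standard library (the data set is hypothetical):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical measurements; mean = 5

# Sample versions divide the sum of squared deviations by n - 1
s2 = statistics.variance(data)
s = statistics.stdev(data)  # square root of s2, in the same units as the data

# Population versions divide by N
sigma2 = statistics.pvariance(data)  # 4.0
sigma = statistics.pstdev(data)      # 2.0
```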
21. Using the mean and standard deviation to describe the data
    - Data can be standardized using the mean and standard deviation. For a single data set, variability can then be discussed in terms of how many members of the data set fall within one, two, three, or more standard deviations of the mean.
22. Standard scores
    - A standard score uses a common scale to indicate how an individual compares to other individuals in a group. These scores are particularly helpful in comparing an individual's relative position. The two standard scores most frequently used in educational research are the Z-score and the T-score.
    - **Z-score:** the simplest form of standard score. It expresses how far a raw score is from the mean in standard deviation units. A big advantage of Z-scores is that they allow raw scores on different tests to be compared.
23. Example
    - A student received raw scores of 60 on a biology test and 80 on a chemistry test. A naive observer might be inclined to infer that the student was doing better in chemistry than in biology. But this might be unwise, for how well the student is doing comparatively cannot be determined until we know the mean and standard deviation of each distribution of scores. Let us suppose the mean is 50 in biology and 90 in chemistry, and the standard deviation is 5 in biology and 10 in chemistry. What does this tell us?
    - Comparison of raw scores and Z-scores on the two tests:

    | Test      | Raw score | Mean | SD | Z-score | Percentile rank |
    |-----------|-----------|------|----|---------|-----------------|
    | Biology   | 60        | 50   | 5  | 2       | 98              |
    | Chemistry | 80        | 90   | 10 | −1      | 16              |
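The biology and chemistry figures in the example can be checked with a one-line Z-score function:

```python
def z_score(raw, mean, sd):
    """How far a raw score is from the mean, in standard deviation units."""
    return (raw - mean) / sd

# Figures from the example above
z_bio = z_score(60, 50, 5)   # 2.0: two standard deviations above the mean
z_che = z_score(80, 90, 10)  # -1.0: one standard deviation below the mean
```

Despite the higher raw score in chemistry, the student stands far above the group in biology and below it in chemistry.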
24. Probability and hypotheses
    - **Probability:** the likelihood of an event occurring, stated as a percentage in decimal form. For example, if an event will occur 25 percent of the time, it can be said to have a probability of .25.
    - **Hypotheses:** there are two kinds of hypothesis: the research hypothesis, which is the predicted outcome of the study, and the null hypothesis, which is the assumption that there is no relationship between the variables in the population.
25. Correlational analysis
    - It shows the existing relationship between variables, with no manipulation of the variables. It is also used to analyze data containing two variables, as well as to examine the reliability and validity of the data-collection procedure.
    - Types:
    - High positive correlation (the variables are directly proportional to each other)
    - Low or zero correlation (there is little or no relationship between the variables)
    - Negative correlation (the variables are inversely proportional to each other)
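As a sketch of how these types behave, the Pearson correlation coefficient can be computed from scratch with the standard library; both data sets below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [1, 2, 3, 4]
score_up = [20, 40, 60, 80]    # directly proportional -> r = 1.0
score_down = [80, 60, 40, 20]  # inversely proportional -> r = -1.0
```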
26. Significance of correlation
    - When the researcher wants to make inferences about the population, he will have to examine the statistical significance of the correlation.
    - Statistical significance can be determined only if the correlations have been obtained from randomly selected samples.
    - The level of significance is very important, since it relates directly to whether the null hypothesis is rejected or not.
    - The significance of a correlation depends on both the size of the correlation and the size of the sample.
27. There are two further techniques
    - **Multiple regression:** through multiple regression it is possible to examine the relationship and predictive power of one or more independent variables with the dependent variable. It shows which variables contribute significantly to explaining the variance in the dependent variable, and how much they contribute.
    - **Discriminant analysis:** examines which combination of variables distinguishes between two or more categories of the dependent variable.
28. Factor analysis
    - In factor analysis the independent variables are not related to a dependent variable as in regression; rather, the analysis operates on a number of independent variables without needing a dependent variable. The interrelationships between and among the variables in the data are examined in an attempt to find out how many independent dimensions can be identified in the data. It thus provides information on the characteristics of the variables. This type of analysis is based on the assumption that variables measuring the same factor will be highly related, whereas variables measuring different factors will have low correlations with one another.
29. T-test
    - It is used to compare the means of two groups.
    - Types: the t-test for independent means and the t-test for correlated means.
    - The result of a t-test provides the researcher with a t-value.
    - Example: a researcher is comparing the performance of two randomly selected groups learning French by two different methods. The experimental group learns with the aid of a computer, while the control group is taught by a teacher. The researcher investigates the effect of computer practice on students' achievement in French. After three months, both groups take an achievement test, and the researcher uses a t-test to examine whether there are differences in the achievements of the two groups.
    - To gain a deeper insight into the data through descriptive statistics, we first need the mean (X̄), standard deviation (SD), and sample size (N) for each of the experimental and control groups.
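A minimal sketch of the independent-means t-value, assuming equal variances; the achievement scores below are hypothetical, not taken from the slide:

```python
import math
import statistics

def t_independent(x, y):
    """t-value for two independent samples, assuming equal variances."""
    nx, ny = len(x), len(y)
    # Pooled variance combines the two sample variances, weighted by df
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))  # standard error of the difference
    return (statistics.mean(x) - statistics.mean(y)) / se

experimental = [78, 82, 88, 90, 85]  # hypothetical French achievement scores
control = [70, 75, 72, 80, 74]
t = t_independent(experimental, control)
```

The resulting t-value is then compared against a t-distribution with nx + ny − 2 degrees of freedom to decide significance.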
30. ANOVA (one-way analysis of variance)
    - One-way analysis of variance is used to examine the differences among more than two groups.
    - The analysis is performed on the variances of the groups, focusing on whether the variability between the groups is greater than the variability within the groups. The F value is the ratio of the between-group variance to the within-group variance:
    - F = between-group variance / within-group variance
    - If the difference between the groups is greater than the difference within the groups, the F value is significant and the researcher can reject the null hypothesis; if the situation is reversed, the F value is not significant.
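The F ratio above can be sketched from scratch with the standard library; the three groups are hypothetical:

```python
import statistics

def anova_f(*groups):
    """One-way ANOVA F value: between-group variance / within-group variance."""
    k = len(groups)                  # number of groups
    n = sum(len(g) for g in groups)  # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, divided by its df (k - 1)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares, divided by its df (n - k)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_value = anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5])  # hypothetical groups
```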
31. Chi-square
    - The chi-square test allows analysis of one, two, or more nominal variables. It is based on the comparison between expected frequencies and the actual, obtained frequencies.
    - Example: a researcher might want to compare how many male and female teachers favor a new curriculum to be instituted in a particular school district. He asks a sample of 50 teachers whether they favor or oppose the new curriculum. If males and females do not differ significantly in their responses, we would expect about the same proportion of each to be in favor of (or opposed to) instituting the curriculum.
    - **Degrees of freedom:** the number of scores in a distribution that are free to vary, that is, that are not fixed.
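The comparison of observed and expected frequencies reduces to one formula; the 30/20 split below is a hypothetical outcome matching the 50-teacher example:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 30 of 50 teachers favor the curriculum, 20 oppose it;
# under the null hypothesis we expect an even 25/25 split
chi2 = chi_square([30, 20], [25, 25])  # (5**2)/25 + (5**2)/25 = 2.0
```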
32. Use the SPSS programme
    - For calculation
    - For analysis
33. The End
    - Allah Hafiz
    - Thank you