William Allan Kritsonis - APA Corrections/Revisions
In 2008, Dr. Kritsonis was inducted into the William H. Parker Leadership Academy Hall of Honor, Graduate School, Prairie View A&M University – The Texas A&M University System. He was nominated by doctoral and master’s degree students.

Dr. Kritsonis Lectures at the University of Oxford, Oxford, England

In 2005, Dr. Kritsonis was an Invited Visiting Lecturer at the Oxford Round Table at Oriel College in the University of Oxford, Oxford, England. His lecture was entitled "Ways of Knowing Through the Realms of Meaning."

Dr. Kritsonis Recognized as Distinguished Alumnus

In 2004, Dr. William Allan Kritsonis was recognized as the Central Washington University Alumni Association Distinguished Alumnus for the College of Education and Professional Studies. Dr. Kritsonis was nominated by alumni, former students, friends, faculty, and staff. Final selection was made by the Alumni Association Board of Directors. Recipients are CWU graduates of 20 years or more who are recognized for achievement in their professional field and who have made a positive contribution to society. For the second consecutive year, U.S. News and World Report placed Central Washington University among the top elite public institutions in the West. CWU was 12th on the list in the 2006 on-line edition of "America's Best Colleges."

Educational Background

Dr. William Allan Kritsonis earned his BA in 1969 from Central Washington University, Ellensburg, Washington. In 1971, he earned his M.Ed. from Seattle Pacific University. In 1976, he earned his PhD from the University of Iowa. In 1981, he was a Visiting Scholar at Teachers College, Columbia University, New York, and in 1987 was a Visiting Scholar at Stanford University, Palo Alto, California.

Doctor of Humane Letters

In June 2008, Dr. Kritsonis received the Doctor of Humane Letters from the School of Graduate Studies at Southern Christian University. The ceremony was held at the Hilton Hotel in New Orleans, Louisiana.

William Allan Kritsonis - APA Corrections/Revisions: Document Transcript

Practical Applications of Educational Research and Basic Statistics

William Allan Kritsonis, PhD
Prairie View A&M University

Lisa Horton, PhD
Prairie View A&M University
Practical Applications of Educational Research and Basic Statistics
William Allan Kritsonis, PhD & Lisa Horton, PhD

Published by National FORUM Journals
17603 Bending Post Drive
Houston, Texas 77095

Copyright 2007/2008 by William Allan Kritsonis, PhD

Except as permitted under the United States Copyright Act of 1976, no part of this professional publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the proper written permission of Dr. William Kritsonis. Absolutely no unauthorized reproduction of this text.

ISBN: 0-9770013-4-2
Library of Congress Cataloging in Publication Data

$79.00 (United States)
$89.00 (Canada)
$99.00 (All others)

Published in the United States of America
Practical Applications of Educational Research and Basic Statistics

Authors

William Allan Kritsonis
PhD Program in Educational Leadership
Prairie View A&M University
Member of the Texas A&M University System
Prairie View, Texas

Lisa Horton
PhD Program in Educational Leadership
Prairie View A&M University
Member of the Texas A&M University System
Prairie View, Texas
Dedication

To all our students, past, present, and future. We wish to thank all the people who devotedly concerned themselves with our professional and personal development and improvement.

ACKNOWLEDGEMENTS

The purpose of the text is to provide content and knowledge in the area of research to students at both the master's and doctoral levels. A list of acknowledgements and credits is provided in the Partial Listing of Selected References and Acknowledgements at the end of this text.
CONTENTS

PART I: Practical Applications of Educational Research and Basic Statistics
Chapter 1: Development of Research
Chapter 2: Historical Research
Chapter 3: Descriptive Research
Chapter 4: Experimental and Quasi-Experimental Research
Chapter 5: Qualitative Research
Chapter 6: Methods and Tools of Research
Chapter 7: Descriptive Statistics and Normal Distribution
Chapter 8: Inferential Data Analysis
Chapter 9: Parts of the Research Proposal
Chapter 10: Parts of a Field Study
Chapter 11: General Statistics Information
Chapter 12: Types of Statistical Data
Chapter 13: Descriptive Statistics
Chapter 14: Types of Distributions
Chapter 15: Formulas
Chapter 16: Understanding and Using Statistics: The Basics
Chapter 17: Getting Started With Research: Avoiding the Pitfalls
Chapter 18: Ethics and Research
Chapter 19: Ethics in Research on Human Subjects and the Role of the Institutional Review Board: Frequently Asked Questions
Chapter 20: Working with the IRB: Suggested Frame of Mind for Researchers
Chapter 21: Research, Writing & Publication

PART II: Fundamental Terms in Educational Research and Basic Statistics

PART III: Partial Listing of Selected References and Acknowledgements

PART IV: About the Authors
PART I: Practical Applications of Educational Research and Basic Statistics
Chapter 1
Development of Research

1. Key Points
   a. Observations
   b. Experience
   c. Intuition
   d. Hand-me-down
   e. Revelation
   f. Definition or Decree
   g. Philosophy or Logic
   h. Instinct

2. Centuries ago, medicine men, religious authorities, and elders were the sources of knowledge. (No one questioned them.)

3. With time, people began to observe orderliness and cause-and-effect relationships in the universe. Events were recorded and analyzed.

4. Some things could be predicted. Events could be predicted in relation to the time of year and the seasons.

5. This brought on a conflict.
   a. Religious authority versus curious thinkers
   b. Authority versus empirical evidence
   c. Elders versus personal experience

6. People eventually began to think systematically. A few great thinkers led the way.
7. Aristotle (Ancient Greece)
   a. First approach to reasoning.
   b. Deductive Method - moving from general assumptions to specific conclusions.
      Syllogism:
      1) Major Premise: All men are mortal.
      2) Minor Premise: Socrates is a man.
      3) Conclusion: Socrates is mortal.

8. Centuries later - Francis Bacon
   a. Direct observation of phenomena.
   b. Arriving at conclusions or generalizations through the evidence of many individual observations led to inductive reasoning.

9. Combining the deductive and inductive methods of reasoning results in the emergence of the scientific method or scientific approach.

10. In 1930, John Dewey detailed the scientific method or scientific approach as follows:
    a. Identify and define a problem
    b. Formulate a hypothesis
    c. Collect, organize, and analyze data
    d. Formulate conclusions
    e. Verify, reject, or modify the hypothesis

    There are many ways to approach the scientific method specifically, and there are numerous generalizations of scientific approaches. The deductive approach is hypothesizing and anticipating the consequences of events.

11. Researchers go back and forth: inductive-deductive-inductive-deductive. An example would be to hypothesize, observe and collect data, reject the hypothesis, reformulate a new hypothesis, observe and collect more data, partially accept the hypothesis, and then collect more data.
12. Science
    a. Definition: an approach to the gathering of knowledge, rather than a field of study.
    b. Two Functions of Science
       i. Develop theory
       ii. Test hypotheses deduced from theory

13. The Way a Scientist Works
    a. Empirical Approach - collect data
    b. Rational Approach - logical deductive reasoning

14. The researcher attempts to develop theories and predict events in hopes of possibly controlling events.
    a. Piaget's Theories - cognitive development
    b. Behavior of gases - air-conditioning, refrigeration
    c. Atomic Theory - nuclear power
    d. Celestial Theory - space travel, NASA, satellites, and other technical advances

15. Two Types of Hypotheses
    a. Research Hypothesis (Alternative Hypothesis) (Symbol = Ha)
       1) An affirmative statement that predicts a single outcome
       2) Examples:
          i. Teaching Method A is better than Teaching Method B.
          ii. Cigarette smoking causes heart disease.
          iii. Extracurricular activities improve academic performance.
          iv. Computer Assisted Instruction improves academic achievement.
          v. Homework improves academic achievement.
    b. Null Hypothesis (Symbol = Ho)
       1) This hypothesis is stated negatively so that the logic of statistical analysis can be applied.
       2) The null hypothesis says that the difference, if any, is due to chance.
       3) Rejecting the null hypothesis with a probability statement supports the research hypothesis (Ha).
       4) Examples:
          i. There is no difference in heart disease between smokers and nonsmokers.
          ii. There is no difference in academic achievement between Method A and Method B.
          iii. There is no difference in grades between CAI students and non-CAI students.
          iv. There is no difference in academic achievement due to participation in extracurricular activities.

16. Sampling Definitions
    a. Population ----------------------- parameter
    b. Sample --------------------------- statistic
    c. Sample: a small proportion of a population selected for observation and analysis
    d. Statistic: a value from a sample used to infer the parameters of a population

17. Types of Samples (see the sampling sketch following these key points)
    a. Simple Random Sample: every subject has an equal chance of being selected
    b. Systematic Sample: every nth member
    c. Stratified Random Sample: subdivide the population and select the sample proportionally; a random sample is drawn from each of the subgroups.
    d. Cluster Sample: the most complex of all samples, used for very large groups; costly and time-consuming.
       Example:
       50 states ------------------ Randomly choose 20 states.
       20 states ------------------ Randomly choose 80 counties.
       80 counties ---------------- Randomly choose 50 school districts.
       50 districts --------------- Randomly choose 10 teachers from each of the 50 school districts.
       Total sample: 500 teachers
    e. Non-probability Sample: use subjects available
    f. Purposive Sample: participants are chosen not by chance but intentionally, to yield data for evaluation purposes

18. Sample Size (Test for beta, or use a table.)
    a. The larger the sample, the less error.
    b. The larger the sample, the better the sample represents the population.
    c. In utilizing a survey, be certain to have a large sample.
    d. Thirty-two (in a sample) is the magic number statistically, but
    e. try to obtain more (with randomness).

19. Purposes of Educational Research
    a. Fundamental or Basic: The purpose of this laboratory type of research is solely to gain new knowledge. This research is often referred to as the search for knowledge for knowledge's sake.
    b. Applied: The purpose is to improve a product (software, textbook, etc.) or process (teaching, learning, etc.) by testing a theoretical concept in a real, actual problem situation. Most educational research is applied research. With the passing of time, basic research usually spurs further applied research. New knowledge gained eventually becomes useful and lends itself to advances in knowledge, which then directs more applied research to take place.
    c. Action: The purpose and focus are on immediate application, not on development of theory. The focus is on the here and now in a local setting.

20. Two Ways to Classify Research
    a. Quantitative Research (Measuring)
       1) Data are analyzed in terms of numbers.
       2) The educational, medical, and agricultural professions use this type of research.
    b. Qualitative Research (Judging)
       1) People and events are described without numerical data. This research consists of a rich, literal description in prose form.
       2) Interviews of people, students, and other sources are used to collect information. Research is written in prose form.

21. Assessment: a fact-finding activity that describes existing conditions

22. Evaluation: fact-finding with judgment added

23. Types of Educational Research
    a. Historical
       1) A description of what was.
       2) Application of the scientific method to the use of historical data to answer historical questions or to test historical hypotheses.
    b. Descriptive
       1) A description of what is.
       2) Application of the scientific method to the acquisition and use of current data to describe current conditions.
    c. Experimental: a description of what will be where certain variables are carefully manipulated.
    d. Qualitative: uses non-quantitative methods to describe what is.
       1) Basically, data are interpreted without numerical analysis.
       2) Interviews, videos, and other methods are used to gather information.
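To make the sampling designs in item 17 concrete, here is a minimal Python sketch. The roster of teacher records, the district field, and the group sizes are invented for illustration; they are assumptions of this sketch, not examples from the text.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical population: 500 teacher records spread over 5 districts
population = [{"id": i, "district": f"District {i % 5}"} for i in range(500)]

# Simple random sample: every subject has an equal chance of selection
simple = random.sample(population, k=32)

# Systematic sample: every nth member after a random start
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified random sample: sample proportionally within each district
strata = {}
for person in population:
    strata.setdefault(person["district"], []).append(person)
stratified = []
for district, members in strata.items():
    k = max(1, round(len(members) * 0.1))  # take 10% from each stratum
    stratified.extend(random.sample(members, k))

# Cluster sample: randomly choose whole districts, then sample teachers within them
chosen_districts = random.sample(sorted(strata), k=2)
cluster = [p for d in chosen_districts for p in random.sample(strata[d], k=10)]

print(len(simple), len(systematic), len(stratified), len(cluster))
```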
SUGGESTED ACTIVITIES:

1. Divide into groups of 3-4. Discuss the following questions: What is your definition of research? What steps do you feel need to be taken to do research? What types of research have you read about or become familiar with in your profession and your educational experience? Share your group activity with the entire class.

2. Each group should answer the following: What two things would you like to see changed in your profession, or what questions would you like answered? How could you use research to address that change? What types of research could you use to answer your questions? How would you set up the type of research needed to answer these questions? Share your group activity with the entire class.

3. Develop a research and a null hypothesis for each of the research ideas identified in the previous activity. Share your group activity with the entire class.

WEBSITES:
San Jose State University - http://www2.sjsu.edu/depts/itl/graphics/induc/ind-ded.html
Chapter 2
Historical Research

Key Points

1. Historical research is an attempt to arrive at conclusions concerning causes, effects, or trends of past occurrences that may help explain past and present events and predict future events.

2. Historical research describes what was.

3. Historical research involves investigating, recording, analyzing, and interpreting events of the past.

4. Sources of Information
   a. Primary Sources
      1) Records and reports of legislative bodies, records and/or memoirs of superintendents, school newspapers, curriculum guides, grade books, along with other sources.
      2) Interviews with superintendents, school board members, principals, teachers, and students.
      3) Relics, such as buildings, furniture, textbooks, and examinations.
   b. Secondary Sources
      1) Reports of a person who relates the testimony of an eyewitness.
      2) Encyclopedias, textbooks, and newspaper accounts.

5. Characteristics of Historical Research
   a. Guided by hypotheses or questions to be answered
   b. Systematic collection of data
   c. Objective evaluation of data
   d. Limited to available data
   e. Explanation - not just a rehashing of the past - explains why it happened as it did
   f. May investigate individuals, ideas, movements, institutions, and cultural circumstances
   g. Employs the scientific method

6. Limitations/Problems with Historical Research
   a. Generalizations may not be feasible.
      1) Too many uncontrollable factors.
      2) Key individuals wield too much influence.
      3) Situations won't repeat themselves.
   b. Historical documents may not be reliable.
      1) They were not written as objects of research.
      2) No objectivity.
      3) Often secondhand, not firsthand, information.
      4) Information is often incomplete.
   c. History is not verifiable by observation or experimentation.
   d. Significant variables cannot be manipulated.
   e. Lack of direct observation and control of variables.
   f. Uniqueness cannot be replicated.

7. Steps in Historical Research
   a. Define the problem
   b. Formulate the hypothesis or questions to be answered
   c. Collect data
      1) Primary sources
      2) Secondary sources
   d. Analyze the data
      1) External criticism - authenticity
         i. Was this person really present?
         ii. Is this a real document from that time period?
      2) Internal criticism - accuracy
         i. Did the person give an unbiased account of what happened?
         ii. Is the document telling a true story, or did the author have a "hidden agenda"?
         iii. Did anyone tamper with the document?
   e. Synthesize data
      1) Conclusions
      2) Generalizations
      3) Explanation or hypothesis
   f. Report findings and conclusions

SUGGESTED STUDENT ACTIVITIES:

1. In groups of 3-4, locate the answers to the following questions:

MAJOR QUESTION: How does your university compare today with the institution as it was 50 years ago?

SUBQUESTIONS:
A. What academic programs related to education were offered 50 years ago?
B. What types of school facilities were available then?
C. What was the type of curriculum offered to students?
D. How large was the student body?
E. What was the ethnic make-up of the student body?
F. What role did the school play in the community, state, and nation?
G. How many professors/instructors were employed?

Compare and contrast the data from 50 years ago with today.
Chapter 3
Descriptive Research

Key Points

1. Characteristics of Descriptive Research
   a. Nonexperimental: deals with natural, not contrived, relationships
   b. Variables are not manipulated.
   c. Ex post facto - a thing done afterward
   d. Involves disciplined inquiry (scientific method)
   e. Uses logical methods of inductive-deductive reasoning to arrive at generalizations
   f. Employs valid statistical procedures in collecting and tabulating data
   g. Employs valid statistical procedures in reporting results
   h. Adds to the body of knowledge

2. Three Types of Descriptive Research
   a. Descriptive Research
      1) This type of research is purely descriptive.
      2) There is no hypothesis.
      3) The researcher is just collecting data.
      4) Example: 65% of principals are male; 35% are female. The average age of principals is 43; the average age of teachers is 38.
   b. Correlational Research
      1) In this research, the researcher is measuring the relationship between two or more variables.
      2) The relationship between the variables may be strong, weak, or there may be no relationship at all.
      3) Correlational studies can be used to predict.
         Example: ITBS scores and CAT scores have a correlation coefficient of .8.
   c. Causal-Comparative Research
      1) This type of research is interested in suggesting causation for the findings. It is aimed at discovering potential causes for a pattern by comparing a treatment group against a non-treatment group.
      2) One should not say that a variable was the cause of an action unless all other variables were controlled. Just identify the limitations of the study.
      3) There is no experimental manipulation.
      4) Example: Collective bargaining apparently had some effect on teacher job satisfaction, since satisfaction levels were higher after collective bargaining than they were prior to collective bargaining.
SUGGESTED STUDENT ACTIVITIES:

1. Divide into groups of four to five students. Develop a chart listing the different types of descriptive research. Compare and contrast each type of research. Provide at least three examples of each type.

   Chart columns: Type of Descriptive Research | Similarities with Other Types of Descriptive Research | Differences with Other Types of Descriptive Research | Examples

   Row 1 - Surveys:
   - Similarities: Very similar to polls in that you collect data according to a general set of questions. You can collect data on attitudes as well as practices, occurrences, etc.
   - Differences: Use of a large number of cases to describe a general population. Polls are usually much smaller and are the collection of attitudes.
   - Examples: a. restaurant questionnaire; b. general satisfaction survey for products purchased; c. (add more to the list)

   Rows 2-8: (to be completed by each group)

2. Describe how you can use both activity analysis and trend analysis to determine the types of teachers that will be needed in the next five years for both an urban and a rural school district. Look at factors of the individual's job as well as growth trends/declines and population changes (an increase in retirees as opposed to school-age children) for the area. Select an elementary, middle, or high school you are familiar with and use both types of descriptive research methods to determine what types of staffing patterns would be needed for your school.
Chapter 4
Experimental and Quasi-Experimental Research

Key Points

1. Definition: determining what will happen under certain circumstances - a method of hypothesis testing - if this is done, what will happen?
   a. Immediate purpose: "prediction" in a local setting
   b. Ultimate purpose: "generalization" to a larger population

2. Law of the Single Variable: If all variables are held constant except one, any changes in the outcome are due to changes in that one variable.

3. Experimental Grouping
   a. Experimental Group vs. Control Group
      1) Experimental Group: group exposed to the variable under consideration
      2) Treatment Group: same as experimental group
      3) Control Group: group not exposed to the variable under consideration
   b. Different Levels of the Same Variable: Subjects may also be grouped according to type of treatment, not just absence of treatment.

4. Variables
   a. Definition: conditions or characteristics of the experiment that the experimenter manipulates, controls, or observes
   b. Independent Variable: variable manipulated by the researcher for grouping
      1) Treatment Variable: factor that can be controlled by the researcher
      2) Organismic Variable: attribute of the subjects that cannot be controlled
   c. Dependent Variable: the outcome; the condition or characteristic that appears, disappears, or changes according to manipulation of the independent variable (results)
   d. Confounding Variable: an aspect of a study that can influence the dependent variable and be confused with the effects of the independent variable
      1) Intervening Variable: an aspect of a study that may modify the effect of the independent variable upon the dependent variable
      2) Extraneous Variable: an uncontrolled aspect of a study that is similar in effect to the independent variable and may render the subjects' grouping invalid

5. Experimental Validity
   a. Internal Validity: the extent to which the independent variable, not extraneous variables, has a genuine effect on the dependent variable
   b. External Validity: the extent to which variable relationships established by the study can be generalized to other settings

6. Threats to Internal Validity
   a. Maturation: change in subject(s) over time
   b. History: events in the course of the study that may influence the dependent variable
   c. Testing: learning to take tests by taking tests
   d. Unstable Instrumentation: use of unreliable data-gathering devices
   e. Statistical Regression: regression to the mean; extremely low or high scores tend not to repeat themselves
   f. Selection Bias: nonequivalence of groups due to poor selection
   g. Interaction of Selection and Maturation: When subjects can choose the group to which they will belong, the variable that directed their choices may have undue influence on the dependent variable.
   h. Experimental Mortality: loss of subject(s).
   i. Experimenter Bias: If the researcher must evaluate a subject, prior knowledge of the subject may have undue influence on the researcher's judgment.

7. Threats to External Validity
   a. Interference of Prior Treatment: carryover of subjects' knowledge or skill from a previous situation that may be mistaken for an effect of the independent variable.
   b. Artificiality of the Experimental Setting: condition in which the experimental setting is so controlled that it does not adequately imitate the real-life situation, so generalizations cannot be made.
   c. Interaction Effect of Testing: condition in which a pre-test may sensitize subjects to concealed purposes of the study and serve as a stimulus to change.
   d. Sampling Deficiencies: error or inability in random selection.
   e. Lack of Treatment Verification: condition in which the treatment was not applied in the manner prescribed by the study.
   f. John Henry Effect: subjects work harder because they realize they are competing with others.
   g. Hawthorne Effect: subjects work harder because they are getting extra attention from the researchers. (The experimental model comes from agricultural research.)

8. Controlling Threats to Experimental Validity
   a. Remove the Variable: the variable is not considered in the results.
   b. Matching Cases: selecting pairs with identical characteristics and assigning them to different groups
   c. Balancing Cases: assigning subjects to each group so that overall group means and variances will be equal
   d. Analysis of Covariance: statistical method that permits the experimenter to eliminate initial differences in the experimental groups
   e. Random Selection: assignment to experimental groups by pure chance; the best way to make the study valid
   f. It is difficult to eliminate all extraneous variables; therefore, it is best to neutralize them. Remember: neutralize, not eliminate!

9. Experimental Design
   a. Definition: procedures of the study that enable valid conclusions by controlling the following:
      1) Selection and assignment of subjects
      2) Control of variables: independent and confounding
      3) The gathering and treatment of data
      4) Development of hypotheses
      5) Statistical testing of hypotheses
   b. Purpose: elimination or neutralizing of threats to experimental validity

10. Three Types of Experimental Designs
    a. Pre-Experimental Design: provides no way of equating the groups that are used
    b. True Experimental Design: uses random selection for equating the groups that are used
    c. Quasi-Experimental Design: used when random selection is not available
11. In studying experimental design, the following Campbell and Stanley symbols are used:
    a. R - random assignment of subjects
    b. X - exposure of a group to a treatment
    c. C - exposure of a group to a control or placebo condition
    d. O - observation or test administered (data gathered)

12. What makes a good study?
    a. Having a control group and
    b. Using random selection

13. Pre-Experimental Designs
    a. The One-Shot Case Study Design
       1) X O
       2) No random selection and no control group
    b. The One-Group, Pretest-Posttest Design
       1) O X O
       2) No random selection, no control group, and interference of variables
    c. The Static-Group Comparison Design
       1) X O
          C O
       2) No random selection
    Pre-experimental design, the least adequate of designs, is characterized by the lack of a control group or the lack of a way to provide for the equivalence of one.

14. True Experimental Designs
    a. The Posttest-Only, Equivalent-Groups Design
       1) R X O
          R C O
       2) Has random selection; has a control group
    b. The Pretest-Posttest, Equivalent-Groups Design
       1) R O X O    gain (X) = O (posttest) - O (pretest)
          R O C O    gain (C) = O (posttest) - O (pretest)
       2) Has random selection; has a control group
    c. The Solomon Four-Group Design
       1) R O X O
          R O C O
          R   X O
          R   C O
       2) Has random selection; has a control group
       3) Difficult to find enough subjects

15. Quasi-Experimental Designs
    a. The Pretest-Posttest Nonequivalent-Groups Design
       1) O X O
          O C O
       2) No random selection
       3) The pretest is used as a covariate.
    b. The Time-Series Design
       1) O O O O X O O O O
       2) No random selection
    c. The Equivalent Time-Samples Design
       1) O X O X O X O X O
       2) No random selection
    d. The Equivalent Materials, Pretest-Posttest Design
       1) O X O
          O X O
       2) No random selection
       3) Can be conducted with just one group or two separate groups

16. Factorial Designs: used when more than one independent variable is involved

SUGGESTED STUDENT ACTIVITY:

1. Develop a study. (What problem do you want to address or solve?)
2. Why would you do it?
3. What do you already know, or what has already been done, about this problem?
4. What are your hypotheses/research questions? (Research and null)
5. What would you do to conduct the research? (Steps, who to talk to, permission for the research, what instruments to use to collect data?)
6. Who are your participants?
7. How will you collect the data?
8. How will you interpret the data?
Chapter 5
Qualitative Research

Key Points

1. Qualitative research is sometimes called naturalistic inquiry.

2. The main reason that we have qualitative research is to explain phenomena.

3. Qualitative research is often done as supplemental research.

4. Three Data Collection Methods of Qualitative Research
   a. Interviews: teachers, secretaries, janitors, and other individuals in the school.
   b. Observations: observe what goes on in gyms, cafeterias, libraries, classrooms, and hallways.
   c. Analysis of written documents and records: test scores, attendance records, discipline reports (suspension and expulsion ratios). When you analyze these, you often employ quantitative steps, such as "more than half," "60%," etc.

5. Triangulation is the use of multiple data collection techniques. For example, it could include interviews, observations, and an analysis of documents or records. It could be any two or all three. One could interview three people from different backgrounds on the same topic.

6. The advantage of using multiple data collection techniques is that the researcher gets a broader or more in-depth view of a school or a situation. Reality will reveal itself this way.

7. Data are interpreted without using mathematical analysis.

8. The study attempts to address four concerns.
   a. The study is concerned with things that a number cannot answer about a school, such as spirit, atmosphere, great extracurricular activities, and educational quality.
   b. Real-world situations are studied, without manipulations.
   c. Specific questions are asked.
   d. It is a rich, detailed description.

9. The disadvantage is that the researcher may get too close to the people being interviewed. This can bias a study.

10. It is important to have empathic neutrality - complete objectivity is impossible. Try to stay neutral and objective. Try to define any potential bias.

11. Five Key Things the Researcher Should Do
    a. Pre-organize: organize ahead of time the things that you need to do.
    b. Collect the data.
    c. Organize the data.
    d. Interpret the data.
    e. Write a report.

12. In qualitative research, the researcher is bringing reality to a study. A qualitative study can supplement a quantitative study, which will present a better picture of reality and truth.
SUGGESTED STUDENT ACTIVITY:

1. Divide into groups of four to five students. As a group, identify an area of concern for which you could develop a brief questionnaire to gather data. (Examples could be: a) the amount of additional fees charged to students at registration; b) whether recess is beneficial to the academic development of children; c) views on a policy issue in your graduate program, etc.) Each member should write down five things they feel are important about the topic, or their views on it. Compare and contrast the viewpoints among the group members. Are there patterns of concern, or do you find a variety of views on the topic?

2. Identify the steps needed to collect data on the topic discussed in activity #1. What can each group member do to ensure they do not let their own biases affect the collection of data? How could triangulation be used to collect data on your group's topic of interest?
Chapter 6
Methods and Tools of Research

Key Points

1. Qualities of a Good Test
   a. Validity: A test is valid if it measures what it purports to measure.
   b. Reliability: A test is reliable if it measures consistently over time.
   c. A test can be reliable but still not be valid.
   d. If a test is valid, it should be reliable and usually is reliable.

2. Types of Validity
   a. Content: Questions should deal with the content covered and the objectives taught.
   b. Face: On the surface, it looks like a valid test or questionnaire.
   c. Criterion: two types
      1) Predictive: It can predict success on a certain criterion.
      2) Concurrent: It is closely related to other measures.
   d. Construct: Some other common measure is compared with the construct.
3. Correlation Coefficient: This procedure quantifies the relationship between paired variables. Coefficients range from -1 to +1; values such as .7, .8, and .9 indicate a high correlation.

4. Buros' Mental Measurements Yearbook can be helpful when you want to compare Test A with Test B. It provides reviews of tests.

5. Helpful Suggestions for Constructing Your Own Test or Questionnaire
   a. Secure a panel of experts to assist you in constructing your questions, such as professors of English and research.
   b. Pilot the test or questionnaire. Administer it to ten to fifteen people who will not be a part of your actual study. Score it and calculate the Cronbach alpha coefficient for the test items to determine the reliability of the instrument.
   c. Some time later, repeat the process of administering the test/questionnaire to the same individuals, and again calculate the Cronbach alpha coefficient.
   d. The scores should be nearly the same. The correlation coefficient should be high (a Cronbach alpha of .62 or higher is considered acceptable for social science research).
   e. It would also be beneficial to ask teachers to provide suggestions for improvement.
   f. It is to your advantage to use a professionally prepared questionnaire. Remember to get permission from the publisher.
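As a concrete illustration of the Cronbach alpha check described in item 5, here is a minimal Python sketch. The pilot data are made up, and the hand-rolled function is one standard way of computing alpha assumed for this sketch, not a formula taken from this text.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of respondents, each a list of item scores."""
    k = len(item_scores[0])                      # number of items
    items = list(zip(*item_scores))              # transpose: one tuple per item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(person) for person in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical pilot data: 6 respondents answering a 4-item questionnaire (1-5 scale)
pilot = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
]

alpha = cronbach_alpha(pilot)
print(f"Cronbach alpha = {alpha:.2f}")  # compare against the .62 benchmark noted in the text
```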
6. Types of Reliability of a Test or Questionnaire/Opinionnaire
   a. Stability over time (test-retest): This is a very important aspect.
   b. Stability over item samples (equivalent or parallel forms): Example: If there are 50 questions on a test or questionnaire, answer only the odd-numbered items and score this part. Next, answer only the even-numbered items and score this part. Your scores should be very close on each part. This is also true for different forms of a test.
   c. Stability of items (internal consistency): All test questions should have commonality (be similarly related).
      - Kuder-Richardson Test (KR-21): This is the average of all possible correlations (of split halves).
   d. Stability over scorers (inter-scorer): Scorers must be consistent in scoring criteria. They must not be biased.
   e. Stability over testers: Testers must be consistent in test administration.
   f. Standard error of measurement: To determine the standard error of measurement, the scores are put into a formula and calculated.
   g. No test is totally reliable or valid.
   h. If you have a valid test, it is probably reliable.

7. Characteristics of a Good Questionnaire
   a. Covers a significant topic.
   b. Looks important to the respondent - state the significance of the topic.
   c. Seeks only information that is not obtainable otherwise.
   d. As short as possible, clear, and easy to complete.
   e. Attractive, neat, and easy to duplicate.
   f. Clear directions; define important terms.
   g. Avoid asking two questions in one item; keep questions short and concise.
   h. Ask objective questions. Do not ask leading questions.
   i. Questions should be presented from general to specific.
   j. Avoid annoying or embarrassing questions.
   k. If delicate questions are included, inform participants that all answers will be kept anonymous. Code questionnaires to keep them anonymous and to enable the researcher to identify which ones have been submitted and which ones have not.
   l. Easy to tabulate and analyze.
   m. Computer tabulate, if possible.

8. Preparing the Questionnaire
   a. Randomly mix subtest questions.
   b. Give the questionnaire to friends to complete in order to obtain feedback.
   c. Pilot it in order to establish reliability.
   d. Get permission from the principal and superintendent to conduct the research.
   e. Include the permission letter with the mailed questionnaire.
   f. Include the following in the mail-out:
      1) Cover letter
      2) Permission letter
      3) Questionnaire
   g. Inform participants that all information will be kept anonymous, and keep it anonymous.
   h. Enclose a stamped, self-addressed return envelope.
   i. Code the questionnaire for follow-up.
   j. Inform participants that the questionnaire is coded.
   k. Select a scale to use.
   Note: If one must use a scale, the Likert scale is the most common and the most practical.

9. General Information Regarding Questionnaires
   a. If you modify a questionnaire by 25% or less, it is still valid. If you modify it by more than 25%, it is not valid.
   b. To validate a questionnaire, have a group of professionals review it.
   c. When an instrument is reliable, it gets the same results over a period of time.
   d. A questionnaire must be reliable and valid.
   e. To determine the reliability of a commercial test, the researcher should write to the publisher of the test and request verification of test validity. The publisher will provide this information. Buros' Mental Measurements Yearbook, which gives summaries of instruments, is available in university libraries.

SUGGESTED STUDENT ACTIVITY:

1) Continue your activity from Chapter 5. Develop a questionnaire (8-10 questions) on your group's topic of interest. Include only open-ended questions on the questionnaire. (Other types of questions might provide quantitative data instead of qualitative data.) Share this questionnaire with other groups in your class to determine if the questions are clear and easy to understand and answer. (Decide whether data will be collected by passing out a questionnaire or by a face-to-face interview. REMEMBER, FOR THE RESULTS TO BE RELIABLE, EACH QUESTIONNAIRE MUST BE ADMINISTERED WITH THE SAME METHOD!)

2) Pass out your questionnaire or conduct a face-to-face interview, asking individuals outside your class to respond to your questions. As a group, review the data you have collected. Look at the data gathered for each of your questions. Look for main themes, concerns, or ideas. Interpret what the findings mean and how the results could be used to make changes, keep the status quo, etc. Report your findings back to your class.
Chapter 7
Descriptive Statistics and Normal Distribution

Key Points

1. The reason for statistics is that there are numerical data in educational research. You will have to interpret, understand, and treat data.

2. Two Ways to Classify Numerical Data
   a. Non-parametric Data: data that are not normally distributed
      1) Nominal
         a) Names or classifies someone or something
         b) Examples
            i. Social security numbers
            ii. License plate numbers
            iii. Bank account numbers
            iv. Student identification numbers
         c) Not very useful in research
      2) Ordinal
         a) Names, classifies, and ranks someone or something
         b) Examples
            i. Class rank
            ii. Sports rankings
   b. Parametric Data: data that assume normality
      1) Interval
         a) Names, classifies, ranks, and has equal intervals between numbers
         b) Has no true zero point
      2) Ratio
         a) Names, classifies, ranks, has equal intervals, and has a true zero
         b) Examples
            i. Test scores
            ii. Height of students

3. Descriptive Statistics: includes Measures of Central Tendency and Measures of Variability (also referred to as spread, dispersion, or scatter)
   a. Measures of Central Tendency
      1) The mean is the arithmetic average.
         i. The symbol for the mean is X̄ (X-bar).
         ii. Formula: X̄ = ∑X / N
         iii. The mean indicates the arithmetic midpoint; it is the best measure of centrality.
         iv. Example: For the scores 2, 4, 5, 6, 7: ∑X = 24 and N = 5, so X̄ = 24/5 = 4.8.
      2) The median is the midpoint when the numbers are placed in ascending or descending order.
      3) The mode is the number that occurs most often in a data set.
   b. One purpose of the mean and median is to represent the "typical" score.
   c. When the distribution of scores is such that most scores are at one end and there are relatively few at the other end (a skewed distribution), it is better to use the median because it is a better indicator of typical test scores.
      1) In a positively skewed distribution, the mean is pulled to the right of the median.
      2) In a negatively skewed distribution, the mean is pulled to the left of the median.
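A quick way to check these measures of central tendency is Python's built-in statistics module. This minimal sketch reuses the five scores from the mean example above; the median and mode values it prints are ordinary outputs of the library, not figures quoted from the text.

```python
import statistics

scores = [2, 4, 5, 6, 7]            # the example data set: sum = 24, N = 5

mean = statistics.mean(scores)       # arithmetic average -> 4.8
median = statistics.median(scores)   # midpoint of the ordered scores -> 5
mode = statistics.mode([2, 4, 4, 5, 6, 7])  # most frequent value in a set that has one -> 4

print(mean, median, mode)
```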
4. Measures of Variability (may also be referred to as spread, dispersion, or scatter)
   a. Range: the highest number minus the lowest number
   b. Sum of Squares: the sum of squared units of deviation from the mean
      1) Symbol: SS
      2) Formula: SS = ∑(X − X̄)²
   c. Variance: the average squared unit of deviation from the mean
      1) Symbols
         i. Sample: S²
         ii. Population: σ²
      2) Formulas
         i. σ² = [∑X² − (∑X)²/N] / N
         ii. σ² = SS / N
      3) The variance is a value that describes the distance that scores are dispersed or spread from the mean.
      4) This value is very useful in describing the characteristics of a distribution.
   d. Standard Deviation: the average unit of deviation from the mean
      1) Symbols
         i. Sample: S
         ii. Population: σ
      2) Formulas
         i. σ = √σ²
         ii. σ = √( [∑X² − (∑X)²/N] / N )

5. Normal Distribution (also referred to as the Z Distribution, Z Theory, Normal Curve, or Bell-Shaped Curve)
   a. Characteristics of a Normal Curve
      1) It is symmetrical.
      2) The mean, median, and mode are all at the same point - right down the center.
      3) The curve is highest at the mean.
      4) Most of the scores cluster or crowd around the mean and decrease in frequency as they move away from the mean.
      5) The curve theoretically never touches the baseline.
   b. Some things in nature are close to being normally distributed, such as the height of men and women, I.Q. test scores, and shoe sizes.
   c. To get a normal distribution, the sample size should be at least 32.
6. Normal Curve
   [Figure: Percent of cases under portions of the normal curve]
   - Between the mean and 1 standard deviation: 34.13% on each side (±1 SD covers 68.26%)
   - Between 1 and 2 standard deviations: 13.59% on each side (±2 SD covers 95.44%)
   - Between 2 and 3 standard deviations: 2.15% on each side (±3 SD covers 99.74%)
   - Between 3 and 4 standard deviations: .12% on each side (±4 SD covers 99.98%)
   Very few scores will extend above or fall below three standard deviations from the mean.
7. Normal Distribution Percentiles
   [Figure: Percentiles corresponding to standard deviation units under the normal curve]
   - -3 SD: .1st percentile
   - -2 SD: 2.3rd percentile
   - -1 SD: 15.9th percentile
   - Mean (0): 50th percentile
   - +1 SD: 84.1st percentile
   - +2 SD: 97.7th percentile
   - +3 SD: 99.9th percentile
   Very few scores will extend above or fall below three standard deviations from the mean.
8. Two Ways of Computing the Variance and Standard Deviation
   a. Conceptual Way (raw scores: 2, 4, 6, 8, 10)

      X      (X − X̄)   (X − X̄)²
      2        -4         16
      4        -2          4
      6         0          0
      8        +2          4
      10       +4         16
      ∑X = 30  sum = 0   SS = 40

      Measures of Central Tendency: X̄ = 6, Md = 6
      Measures of Variability:
         SS = 40 (sum of squares)
         σ² = SS/N = 40/5 = 8 (variance)
         σ = √8 = 2.8 (standard deviation)
   b. Computational Way (raw scores: 2, 4, 6, 8, 10)

      X      X²
      2       4
      4      16
      6      36
      8      64
      10    100
      ∑X = 30   ∑X² = 220   N = 5   X̄ = 6

      σ² = [∑X² − (∑X)²/N] / N
         = [220 − (30)²/5] / 5
         = [220 − 900/5] / 5
         = (220 − 180) / 5
         = 40 / 5
         = 8

      σ = √8 = 2.8
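The arithmetic in both worked examples can be verified with a few lines of Python. This is only an illustrative check of the ∑X²-based formula against the deviation-score formula, using the same five raw scores.

```python
import math

scores = [2, 4, 6, 8, 10]
n = len(scores)
mean = sum(scores) / n                                   # X-bar = 6

# Conceptual way: sum of squared deviations from the mean
ss = sum((x - mean) ** 2 for x in scores)                # SS = 40

# Computational way: SS = sum(X^2) - (sum(X))^2 / N
ss_comp = sum(x ** 2 for x in scores) - sum(scores) ** 2 / n

variance = ss / n                                        # sigma^2 = 8
std_dev = math.sqrt(variance)                            # sigma is about 2.83 (2.8 in the text)

print(ss, ss_comp, variance, round(std_dev, 2))
```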
9. Correlation
   a. Correlation is the linear relationship between two or more variables.
   b. The degree of linear relationship is measured by the correlation coefficient.
      1) The symbol is "r" for Pearson's r. (Karl Pearson)
      2) Types of correlation
         i. Positive correlation
            a) A perfect positive correlation is +1, which is rarely if ever encountered.
            b) Correlations of .7, .8, and .9 indicate a high positive correlation.
            c) Examples of positive correlation (as one variable increases, the other has a tendency to increase):
               - high IQ and high GPA
               - height and shoe size

            Example of a positive correlation - as scores in X go up, scores in Y go up:

                     Time Spent Studying (X)   Grade on Test (Y)
            John              1                       2
            Bob               2                       4
            Mark              3                       6
            Bill              4                       8
            Jeff              5                      10
         ii. Negative correlation
            a) A perfect negative correlation is -1, which is rarely if ever encountered.
            b) Examples of negative correlation (as one variable increases, the other has a tendency to decrease):
               - total oil production and price per barrel
               - more graduate courses taken in college and free time

            Example of a negative correlation - as scores in X go up, scores in Y go down:

                     Time Spent Studying (X)   Grade on Test (Y)
            John              1                       5
            Bob               2                       4
            Mark              3                       3
            Bill              4                       2
            Jeff              5                       1

            c) A negative correlation does not necessarily mean that a bad situation exists. For example, a person who increases exercise would likely lose weight.
         iii. No correlation
            a) A perfect lack of correlation is zero; however, a coefficient rarely falls exactly on zero - it is more likely to land near zero, such as .1, .2, or .3.
            b) Examples of no correlation:
               - height and IQ
               - total rice production and the price of gold
10. Three Ways to Interpret the Coefficient of Correlation (Pearson's r)
    a. The .90 / .80 / .70 Rule
       1) .90 indicates a very strong relationship.
       2) .80 indicates a strong relationship.
       3) .70 indicates a moderate relationship.
       4) .60 indicates a fair relationship.
       5) Below .5 indicates that the relationship may be due to chance.
       6) There is a stronger indication that no relationship exists as the number gets closer to zero, such as .2 and .3.
    b. r² = Coefficient of Determination: an estimate of the common variance between variables, determined by squaring the correlation coefficient. When the percent of X is known, one can determine a percent of what Y would be.
       1) Formula:

          r = [∑XY − (∑X)(∑Y)/N] / √( [∑X² − (∑X)²/N] × [∑Y² − (∑Y)²/N] )

          The two bracketed terms under the square root are the Sum of Squares of X (SSx) and the Sum of Squares of Y (SSy), so the formula can also be written:

          r = [∑XY − (∑X)(∑Y)/N] / √( SSx × SSy )
       2) Example

                    X     X²     Y     Y²     XY
          John      1      1     2      4      2
          Bob       2      4     2      4      4
          Bill      3      9     3      9      9
          Joe       4     16     4     16     16
          Sam       5     25     5     25     25
          ∑        15     55    16     58     56

          SSx = ∑X² − (∑X)²/N = 55 − 15²/5 = 55 − 225/5 = 55 − 45 = 10
          SSy = ∑Y² − (∑Y)²/N = 58 − 16²/5 = 58 − 256/5 = 58 − 51.2 = 6.8

          r = [∑XY − (∑X)(∑Y)/N] / √(SSx × SSy)
            = [56 − (15)(16)/5] / √(10 × 6.8)
            = (56 − 48) / √68
            = 8 / 8.2
            ≈ .97

          Pearson's r = .97 (a very high correlation). X and Y have a lot in common: r² = .94 (given X, one could tell 94% of the time what Y would be).

       3) Coefficient of Determination: Given X, one could determine 94% of the time what Y would be.
       4) Since correlation is concerned with prediction, prediction becomes more difficult as the correlation goes down.
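For readers who want to reproduce this calculation, here is a minimal sketch using scipy.stats.pearsonr on the same five paired scores. SciPy is an assumption of this sketch, not a tool named in the text.

```python
from scipy.stats import pearsonr

x = [1, 2, 3, 4, 5]   # e.g., time spent studying
y = [2, 2, 3, 4, 5]   # e.g., grade on test

r, p_value = pearsonr(x, y)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")  # roughly r = 0.97 and r^2 = 0.94, matching the worked example
```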
    c. t test: the test of the significance of the difference between two means
       1) Think of a t test as a correlation turned inside out.
       2) A t test indicates the difference between numbers, whereas a correlation indicates the similarities between numbers.

11. Measures of Relative Position: Standard Scores
    a. z score
       1) When comparing scores in distributions where the total points may differ, a z score permits a realistic comparison of scores and may allow equal weighting of the scores.
       2) Formula: z = (X − X̄) / σ
          where X = raw score, X̄ = mean, and σ = standard deviation
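As a small worked illustration of the z-score formula, the sketch below uses the population mean of 32 and standard deviation of 3 given in the practice problems that follow; the helper function itself is just an assumption for illustration.

```python
def z_score(raw, mean, sd):
    """Convert a raw score to a z score: z = (X - mean) / sd."""
    return (raw - mean) / sd

# Population mean = 32, population standard deviation = 3 (from the practice problems below)
print(z_score(35, 32, 3))   # 1.0  -> one standard deviation above the mean
print(z_score(29, 32, 3))   # -1.0 -> one standard deviation below the mean
```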
12. Normal Distribution Problems

    Directions: Treat each of the following as if the distribution is normal. What percent of scores lies between the two z scores in each of the following pairs?
    (1) 3 and -3 ______    (5) 1 and -1 ______    (9) -.5 and 1.2 ______
    (2) 0 and 1 ______     (6) 0 and .5 ______    (10) 1.3 and 2.4 ______
    (3) 0 and 6 ______     (7) 1 and -2 ______    (11) 1.5 and -1.5 ______
    (4) 2 and -2 ______    (8) 0 and -6 ______    (12) 0 and 2 ______

    Directions: Treat each of the following as if the distribution is normal. Identify the z score for each of the following percentiles.
    (13) 50th percentile ______    (19) 99th percentile ______
    (14) 60th percentile ______    (20) 40th percentile ______
    (15) 65th percentile ______    (21) 30th percentile ______
    (16) 70th percentile ______    (22) 16th percentile ______
    (17) 90th percentile ______    (23) 5th percentile ______
    (18) 95th percentile ______    (24) 75th percentile ______

    Directions: Treat each of the following as if the distribution is normal. The population mean is 32 and the population standard deviation is 3. Identify the z score for each of the following raw scores.
    (25) 29 ______    (28) 35 ______
    (26) 38 ______    (29) 26 ______
    (27) 28 ______    (30) 33 ______
    Directions: Treat each of the following as if the distribution is normal. What percent of scores lies between each of the following pairs of raw scores? (Population mean = 32, population standard deviation = 3.)
    (31) 32 and 35 ______    (36) 23 and 41 ______
    (32) 29 and 26 ______    (37) 32 and 30 ______
    (33) 38 and 41 ______    (38) 26 and 23 ______
    (34) 32 and 33 ______    (39) 23 and 20 ______
    (35) 35 and 38 ______    (40) 32 and 34 ______

    (One way to check these answers with software is sketched after the activities below.)

SUGGESTED STUDENT ACTIVITIES:

Divide into groups of two to three students. USE YOUR CALCULATORS!
Use the following set of scores to complete the following exercises:
63, 79, 88, 88, 87, 89, 89, 90, 90, 90, 93, 94, 95, 95, 98, 99

1. Compute the mean of the set of scores listed above.
2. Determine the median of this set of scores.
3. Does the mean differ from the median? Why or why not?
4. Find the range of this set of scores.
5. What is the mode of this set of scores?
6. Compute the variance of this set of scores.
7. Compute the standard deviation.
8. Using the mean and the standard deviation, plot these test scores to see where they fall in a distribution around the mean.
9. Compare and contrast positive and negative correlation.
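Although the text intends the normal-distribution problems to be worked from a normal-curve table, the answers can be checked in Python with SciPy's normal distribution functions. This sketch (SciPy being an assumption, not part of the text) shows the pattern for each problem type.

```python
from scipy.stats import norm

# Percent of scores between two z scores, e.g., problem (5): z = -1 and z = +1
pct_between_z = (norm.cdf(1) - norm.cdf(-1)) * 100        # about 68.3%

# z score for a given percentile, e.g., problem (18): the 95th percentile
z_for_95th = norm.ppf(0.95)                               # about 1.645

# Raw-score problems use mean = 32 and standard deviation = 3,
# e.g., problem (31): percent of scores between 32 and 35
pct_between_raw = (norm.cdf(35, loc=32, scale=3) - norm.cdf(32, loc=32, scale=3)) * 100  # about 34.1%

print(round(pct_between_z, 1), round(z_for_95th, 3), round(pct_between_raw, 1))
```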
Chapter 8
Inferential Data Analysis

Key Points

1. Central Limit Theorem
   a. The characteristics of sample means are detailed by this theorem.
   b. Characteristics of sample means:
      1) Sample means are normally distributed.
      2) The mean of the sample means will be the mean of the population.
      3) The sample means will have a mean (the population mean) and a standard deviation.

2. Null Hypothesis
   a. A null hypothesis states that if there is a difference, it is due to chance.
   b. By rejecting a null hypothesis, the researcher is providing a stronger test of logic.
   c. Additionally, by rejecting the null hypothesis, the researcher is concluding that there is a significant difference between the two means and that this difference is not due solely to chance.
   d. The .05 alpha level is often used as the standard for rejecting the null hypothesis; it means that 95 times out of 100 the results are not due to chance.
   e. The .01 alpha level is a more rigorous test. It means that 99 times out of 100, the results are not due to chance.
3. z test: One-Tailed Test
   a. One-tailed test at the .05 alpha level.
   b. The researcher thinks the scores of the sample will be superior to the established scores.
   [Figure: one-tailed test at the .05 alpha level - 95% acceptance area below the critical value, 5% rejection area in the upper tail, critical z score = +1.65]
4. z test: Two-Tailed Test
   a. Two-tailed test at the .05 alpha level.
   b. The researcher thinks the scores of the sample will be different from the established scores.
   [Figure: two-tailed test at the .05 alpha level - 95% acceptance area in the center (47.5% on each side of the mean), 2.5% rejection area in each tail, critical z scores = -1.96 and +1.96]

5. Critical Values for z (rejection of the null hypothesis)

   Test               .05 alpha level   .01 alpha level
   One-tailed test         1.65              2.33
   Two-tailed test         1.96              2.58
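These critical values come straight from the normal curve, so they can be reproduced with SciPy's inverse-CDF function. This short sketch (SciPy assumed; rounding matches the table above) is only a check, not part of the original text.

```python
from scipy.stats import norm

# One-tailed critical values: all of alpha sits in one tail
one_tailed_05 = norm.ppf(1 - 0.05)      # about 1.645 -> table value 1.65
one_tailed_01 = norm.ppf(1 - 0.01)      # about 2.326 -> table value 2.33

# Two-tailed critical values: alpha is split between the two tails
two_tailed_05 = norm.ppf(1 - 0.05 / 2)  # about 1.960
two_tailed_01 = norm.ppf(1 - 0.01 / 2)  # about 2.576 -> table value 2.58

print(one_tailed_05, one_tailed_01, two_tailed_05, two_tailed_01)
```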
6. Degrees of Freedom
   a. Definition: conceptually, always N − 1.
   b. As the number of degrees of freedom increases, the strength of the prediction increases.

7. Four Main Types of Tests Used in Educational Research
   a. Independent t Test (a very useful test)
      1) Characteristics
         i. No population mean
         ii. No σ
         iii. Compares the means of two different, independent groups
      2) Example
         i. Group X has been taught with Method A; compute the mean.
         ii. Group Y has been taught with Method B; compute the mean.
         iii. The researcher wants to determine if one method is better than the other method.
      3) Formula for the Independent t Test (for two groups of equal size N):

         t = (X̄ − Ȳ) / √( (SSx + SSy) / (N(N − 1)) )

         where SSx = ∑X² − (∑X)²/N and SSy = ∑Y² − (∑Y)²/N, and the degrees of freedom are N₁ + N₂ − 2.
      4) Used in medical, agricultural, and educational research.
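Here is a minimal sketch of the independent t test using SciPy; the two groups of scores are invented purely for illustration, and scipy.stats is an assumption of the sketch, not a tool the text prescribes.

```python
from scipy.stats import ttest_ind

# Hypothetical achievement scores for two independently taught groups
method_a = [78, 85, 90, 72, 88, 84, 91, 79]
method_b = [70, 75, 80, 68, 77, 74, 82, 71]

t_stat, p_value = ttest_ind(method_a, method_b)  # pooled-variance (classic) independent t test

# Reject the null hypothesis at the .05 alpha level if p < .05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, reject H0: {p_value < 0.05}")
```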
   b. Correlated t Test (paired) (a very useful test)
      1) Characteristics
         i. Pre- and post-tests (pairs)
         ii. Involves only one group
         iii. D = X − Y (the difference for each pair of scores)
      2) Formula for the Correlated t Test:

         t = (X̄ − Ȳ) / √( (∑D² − (∑D)²/N) / (N(N − 1)) )

      3) Example
         i. Pretest the group, then compute the mean.
         ii. Teach the group using a special method (the treatment).
         iii. Post-test the group and then compute the mean.
         iv. The researcher wants to determine if there is a significant difference between the pre- and post-test means. If there is a significant difference, then the special teaching method is helpful. (The null hypothesis is rejected.)
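The paired (correlated) t test follows the same pattern; here is a minimal SciPy sketch with invented pretest/posttest scores for a single group - again an illustration, not data from the text.

```python
from scipy.stats import ttest_rel

# Hypothetical pretest and posttest scores for the same ten students
pretest  = [62, 70, 55, 68, 74, 60, 65, 71, 58, 66]
posttest = [70, 74, 63, 75, 80, 66, 72, 78, 61, 73]

t_stat, p_value = ttest_rel(pretest, posttest)  # paired-samples t test on the score differences

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p value below the chosen alpha level (.05 or .01) would lead to rejecting the null hypothesis.
```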
   c. Analysis of Variance (ANOVA)
      1) The Independent t Test is a subset of ANOVA.
      2) Characteristics
         i. Involves three or more groups.
         ii. All groups are treated differently.
      3) Also referred to as the F Test, which was named after the man who invented the test.
   d. Pearson's r (correlation)
      1) Characteristics
         i. Measures the degree of relationship between two variables.
         ii. Determines the degree of linear relationship between two variables.
      2) Formula:

         r = [∑XY − (∑X)(∑Y)/N] / √( [∑X² − (∑X)²/N] × [∑Y² − (∑Y)²/N] )
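A one-way ANOVA comparing three differently treated groups can be run with SciPy's f_oneway. This sketch uses made-up scores for three teaching methods; the data and the choice of library are illustrative assumptions, not examples from the text.

```python
from scipy.stats import f_oneway

# Hypothetical achievement scores for three groups, each taught differently
method_a = [85, 88, 90, 84, 87, 91]
method_b = [78, 80, 83, 76, 82, 79]
method_c = [70, 74, 72, 69, 75, 73]

f_stat, p_value = f_oneway(method_a, method_b, method_c)  # F test across three or more group means

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F (p < .05) says at least one group mean differs; follow-up comparisons identify which.
```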
Chapter 9
Parts of the Research Proposal

Note: The research proposal is a framework for any research study. A proposal should also clearly and succinctly reveal your intended plan. In most instances, university policies and specifications for the length of research proposals are adopted; however, it is quality, not quantity, that is important when writing a prospectus for research.

1. Title Page
   a. Use enough descriptive words for the study to be cataloged by ERIC and Resources in Education.
   b. Example: The Effects of Collective Negotiations on Teacher Job Satisfaction in the Temecula School District in Southern California.

2. Introduction to the Study
   a. This part should be relatively short and capture the reader's attention.
   b. It describes what the study will cover and should be written in a manner that will make the reader interested in the topic.
   c. A brief background of where the study will be conducted may be included.
   d. The operative word for this section is "brief." Keep in mind that this is a proposal, not the completed study.

3. Review of Literature
   a. This component reviews pertinent literature and information relevant to your topic.
   b. Previous research should be included.
   c. Five to ten citations are satisfactory for the proposal.
   d. Citations should be relevant and recent.

4. Statement of the Problem
   a. This part logically establishes the different underlying intellectual motives for conducting the research on this specific topic.
   b. Opposing conclusions are a good way to set up the statement of the problem.
   c. Example: There appear to be opposing conclusions in the research concerning collective bargaining and its effect upon the plight of the teacher. Smith (2005) found that bargaining had not benefited teachers. Jones (2005) noted that bargaining had greatly enhanced teacher morale.

5. Purpose of the Study
   a. This section succinctly describes what the researcher intends to find.
   b. Example: The purpose of this study is to determine the extent to which the collective bargaining process has influenced teacher job satisfaction levels.

6. Research Questions
   a. In this part, you break down the Purpose of the Study into several pertinent research questions.
   b. It is important for the following parts to fall logically in line:
      1) Statement of the Problem
      2) Purpose of the Study
      3) Research Questions
   c. Examples: What was the level of teacher job satisfaction before bargaining rights? What was the level of teacher job satisfaction after bargaining rights?
7. Hypotheses
   a. The research questions are put in statistical terms in this section.
   b. Example: There is no significant difference in teacher job satisfaction following the acquisition of bargaining rights.

8. Definitions
   a. In this part, define terms specific to your study that may not be familiar to the outside reader.
   b. Specifically define general terms the researcher assumes all individuals would know but that might differ among school districts in a state, region, or nation.
   c. Example: TAE - the school district affiliate of the National Education Association. Sixty-nine percent of all Temecula School District teachers are members of this organization.

9. Assumptions
   a. Any assumption the researcher makes should be duly stated.
   b. Example: The instrument used in this study will accurately measure the job satisfaction levels of teachers.

10. Limitations
    a. Any boundary or limitation of the study must be stated.
    b. Example: The study will measure levels of teacher job satisfaction in only one school district. Teachers surveyed may vary in years of experience.
  • 11. Methodology a. This section includes the following four parts: 1) Subjects i. Describe subjects or sample (who and where). ii. The population may be described in this part. 2) Instrument i. Give details about the test or instrument and specific materials. ii. Validity and reliability may be discussed. 3) Procedures i. Describe a step-by-step process of the researcher’s plan of action. ii. The timeline and permission to conduct the study may be included. 4) Data Analysis i. Describe how the data will be analyzed. ii. The following information should be included: iii. The type of statistical test that will be used, whether or not means will be compared, and whether or not charts or graphs will be included.12.Significance of the Study a. State why this study is worthy of the time and effort that will go into it. b. Substantiate the reasoning behind conducting a study of this type in this district, state or region. c. Example: Data derived from this study will serve as a guide to school districts in similar settings that are also considering the collective bargaining process. 61
  • 13.References a. References should be relevant, recent, and cited in the American Psychological Association (APA), Modern Language Association (MLA), or any other required format. b. A sufficient amount of references should be used. The number of references will vary depending on the topic and resources available.SUGGESTED STUDENT ACTIVITIES:1. Divide into groups of four to five students. Every group member should contribute at least one area of concern that they would like to solve in their role as educators. Identify one area of concern that is important to the entire group. This will become the purpose of your study. Write three to five research questions (what you want to know about the area of concern).2. Develop three to five hypotheses for your group study.3. Define terms that may not be familiar to the outside reader that would be related to your study.4. Identify the methodology that would be used for your study. (Subjects, instrument to be used to collect the data, procedures to be used to collect the data, include a timeline of when this would be done, and the type of statistical test you would use to analyze the data you will collect.) 62
  • Chapter 10 Parts of a Field StudyNote: Parts of the Field Study have been discussed in the section entitled “Parts of a Research Proposal,” therefore only their titles will be listed in this section. Additional parts and those parts that need to be expanded will be listed and discussed in this section.1. Title2. Abstract a. This is a summary of the complete study. b. It is usually around a page in length.3. Table of Contents a. List the chapters of the study. b. List only the page number on which each chapter begins.4. Chapter 1: Introduction to the Study a. This chapter includes the following parts: 1) Introduction to the Study 2) Statement of the Problem 3) Purpose of the Study 4) Research Questions and/or Hypotheses 5) Definitions 6) Assumptions 7) Limitations 8) Significance of the Study b. This chapter is basically the proposal minus the Review of the Literature and the Methodology. 63
  • 5. Chapter 2: Review of the Literature a. Expand the review of the literature. b. Ten to twenty citations are sufficient. c. Remember to keep the citations recent and relevant.6. Chapter 3: Methods and Procedures a. This is basically the part in the proposal that was labeled Methodology. It will be expanded. b. Describe in detail what was done in the study. c. Some information in this section may have to be changed because the information here will state what was actually done, not what the researcher planned to do as was stated in the proposal.7. Chapter 4: Analysis of Data or Results of Study a. Describe in prose and in chart or graph form the numerical results of the study. b. Do not explain, summarize, or conclude in this chapter. c. Tell and show only the results. Do not attempt to explain the results.8. Chapter 5: Summary, Conclusion, and Recommendations a. Summarize the results of the study. b. An explanation may be given as to why the results turned out as they did. c. Try to consider all factors and variables that could have influenced the dependent variable. d. Recommendations for further study in regard to this topic should be included. e. Further study could likely be conducted on this issue at another school or in a slightly different manner.9. References 64
  • 10. Appendices a. Make a list of the location of specific tables, charts, or graphs. b. Remember to include the chapter and page number. A CHECKLIST OF ITEMS FOR TRADITIONAL FIVE CHAPTER DISSERTATIONS & THESESThe following is a checklist of items which are typically included in a graduate research project,thesis, or dissertation. Not all of the suggested categories are necessary or appropriate for allstudies, and the order of items within chapters may vary somewhat. These items are intended toserve as a guide:CHAPTER 1: INTRODUCTION________ Introduction________ Background of the problem (e.g., educational trends related to the problem, unresolved issues, social concerns)________ Statement of the problem (basic difficulty - area of concern, felt need)________ Research Questions to be answered or investigated________ Hypothesis or Hypotheses statements if needed or specified by advisor.________ Purpose of the study (goal oriented) -emphasizing practical outcomes or products________ Importance of the study - may overlap with the statement of problem situation________ Assumptions (postulates)________ Delimitations of the study (narrowing of focus)________ Limitations of the study________ Definition of terms (largely conceptual here; operational definitions may follow in Methodology Chapter)________ Organization of the Study....Outline of the remainder of the thesis or proposal in narrative form.CHAPTER II: REVIEW OF RELATED LITERATURE________ Organization of the present chapter - overview________ Historical background (if necessary)________ USE KEY WORDS in each Research Question and follow with the literary review that addresses each question.Purposes to be Served by Review of Research Literature________ Acquaint reader with existing studies relative to what has been found, who has done work, when and where latest research studies were completed, and what approaches involving research methodology, instrumentation, and statistical analyses: were followed (literature review of methodology sometimes saved for chapter on methodology)________ Establish possible need for study and likelihood for obtaining meaningful, relevant, and significant results________ Furnish from delineation of various theoretical positions, a conceptual framework affording bases for generation of hypotheses and statement of their rationale (when appropriate)________ Organize this chapter in the same order as the research questions are stated in chapter I. Be very careful to fully align the review of literature with the research questions. Note : In some highly theoretical studies the chapter "Review of Literature" may need to precede "The Problem" chapter so that the theoretical framework is established for a succinct statement of the research problem and hypotheses. In such a case, an advance organizer in the form of a brief general statement of the purpose of the entire investigation should come right at the beginning of the "Review of Literature" chapter. 65
  • Sources for Literature Review________ General integrative reviews cited that relate to the problem situation or research problem such as those found in Review of Educational Research, Encyclopedia of Educational Research, or Psychological Bulletin.________ Specific books, monographs, bulletins, reports, and research articles --- preference shown in most instances for literature of the last ten year.________ Unpublished materials (e.g.. dissertations. theses, papers presented at recent professional meetings not yet in published form, but possibly available through another source.________ Selection and arrangement of literature review often in terms of questions to be considered, hypotheses set forth, or objectives or specific purposes delineated in problem chapter ________ Summary of literature reviewed ( very brief)CHAPTER III: METHODOLOGY or the recipe/how to chapter________ Overview or at least an introduction________ Restate the research questions________ Hypotheses stated in NULL FORM.________ Description of research methodology or approach (e.g., experimental, quasi-experimental, correlational, causal-comparitive, or survey)________ Research design Spell out independent, dependent variables________ Subjects of the Study (Clearly describe the sample and population.)________ Instrumentation (tests, measures, observations, scales, and questionnaires)________ Pilot studies (as they apply to the research design, development of instruments, data collection techniques, and characteristics of the sample)________ Validity--provide specifics on how you will establish validity or provide validity data specific to your instrument from other studies with similar populations________ Reliability--provide specifics on how you will establish reliability or provide data specific to your instrument from other studies with similar populations________ Procedures (Field, classroom or laboratory e.g., instructions to subjects and etc.________ Data collection and recording________ Data analysis (statistical analysis or qualitative analysis explained in detail)________ SummaryCHAPTER IV : ANALYSIS OF DATA________ Findings are presented in tables or charts when appropriate________ Findings are reported with respect to furnishing evidence for each question asked (ORGANIZED IN THE SAME ORDER AS HEADINGS IN CHAPTER I & III) or each hypothesis posed.________ Appropriate headings are established to correspond to each main question or hypothesis considered________ Other factual information kept separate from interpretation, inference, and evaluation (one section for findings and one section for interpretation or discussion) Note: In certain historical, case-study and other types of investigations, factual and interpretive material may need to be interwoven to sustain interest level, although the text should clearly reveal what is fact and what is interpretation.________ Separate section often entitled "Discussion", "Interpretation", or "Evaluation" ties together findings in relation to theory, review of literature, or rationale________ Summary of chapterCHAPTER V : SUMMARY, CONCLUSIONS, RECOMMENDATIONS________ Brief summary of the study and findings portion of Chapter IV________ Conclusions (Often restatement of the research questions key topics or variables and final conclusions analyzing the answers) 66
  • ________ Recommendations (practical suggestions for implementation of findings)________ Recommendation for further study ORGANIZATION AND STRUCTURE OF THE DOCUMENT1. Copyright Page2. Title Page3. Signature Page4. Abstract5. Dedication Page6. Acknowledgments7. Table of Contents8. List of Tables9. List of Figures10. Body text, divided into chapters designated by upper case Roman numerals11. References in the specified style manual format12. Appendices and supporting documents13. Human Subjects Review Approval document14. Author’s VitaTABLES/FIGURES1. Tables and/or figures should appear no more than one page from where they are first referenced2. Tables and/or figures may be placed in the appendices and referenced in the body text3. Tables and/or figures are identified by chapter and number. ( Example: Table 4.1 would be first table to appear in chapter 4)MARGIN SETTINGS:1. 1 ½’ Left margin and 1” inch top, bottom and right margin or other university set specificationsSPACING1. Double spaced throughout the document2. Indent each paragraph first line .05”PAPER1. 100 percent cotton, 20-pound bondFONT AND SIZE1. Arial, Bookman, Times New Roman or similar font recommended2. Size: Standard 12 fontPAGINATION1. Every Page should be assigned a number2. Preliminary pages, small Arabic numbers (i, ii, iii, iv …etc) in the center at bottom of each numbered page3. Abstract receives the first numbering at the bottom and in the center4. First page of each chapter should be in the center at the bottom of the page in the footer5. All other pages should have numbers in the upper right hand side of the page 67
  • Dissertation Web Resources:http://www.dissertation.com This site has a number of great tips, feature articles and amonthly newsletter related to the dissertation process.http://www.jsmusic.org.uk/students/dissertations/dissertations_checklist.html Thissite contains a valuable checklist for help with organizing and completing the document.http://www.gradresources.org/worksheets/gantt.htm This site contain a neat chartwith each component and a timeline to help guide you through the steps to completion.http://www.lib.duke.edu/libguide/plagiarism.htm This site defines and explainsplagiarism in detail along with the consequences for the act.http://www.lib.duke.edu/libguide/home.htm Duke university provides a great resourcefor selecting the topic and researching library resources on this quality website.http://frontpage.wiu.edu/~rlm119/writinglinks.html Dr. Marshall’s writing sitecontains a good set of links to assist with grammar, punctuation, style and other writingissues.http://frontpage.wiu.edu/~rlm119/apalinks.html Dr. Marshall’s APA site has a numberof good links to assist with APA in-text and reference list formatting.http://www.citationmachine.net Citation machine is a good tool to utilize in the questfor proper APA or MLA references.http://frontpage.wiu.edu/~rlm119/templates.html Dr. Marshall’s template site shouldsave you some time in formatting table of contents and other essential pages of thedocument.http://www.academicladder.com/dissertation/dissertation-coaching-help.htmAcademic ladder provides a free bi-weekly tips subscription to help conquer some of theproblems and issues that arise in writing the dissertation or thesis. 68
  • Chapter 11 General Statistics InformationKey Points1. Definitions of Statistics a. Statistics involves manipulations of numbers and conclusions based on these numbers. b. Statistics means to state numbers. c. Statistics is the study of numerical variation. d. Statistics is making decisions with incomplete data (without having all the numbers). e. Statistics is a numerical characteristic of a sample.2. Examples a. Agricultural statistics (acres, grain, water, and fertilizer) b. Medical statistics (types of drugs, amounts, and patients)3. Two Types of Statistics a. Descriptive Statistics 1) Summarizing or describing test scores (data) with numbers 2) Includes the mean, median, mode (Measures of Central Tendencies) b. Inferential Statistics 1) Definitions i. A method of reaching conclusions about unmeasurable populations using sample evidence and probability 69
• ii. A method of taking chance factors into account when using samples to reach conclusions about populations
 2) Most research is done with a sample.
 3) When a sample is selected, there is a certain level of uncertainty. (A probability table is needed.)
 4) Example: From a population of 5 million 5th-grade students, 100 students are randomly selected (the sample). The sample is divided into two groups: one taught using Method A and one taught using Method B.
 Mean (average) for students taught using Method A = 48
 Mean (average) for students taught using Method B = 52
 (Students were taught differently.)
4. Population
 a. Definition: Consists of all members (scores) of a specific group
 b. The researcher selects his or her population. The following are examples:
 1) All fifth graders in the United States
 2) All fifth graders in Texas
 3) All fifth graders in Waller County
5. Sample
 a. Definition: A subset of a population
 b. Example 70
• 1) Of five million fifth grade students (population), 100 students were randomly selected (sample).
 2) Of those 100 students, 60 are male students and 40 are female students. [Each group is a subsample of the sample in 1).]
6. Parameter
 a. Definitions
 1) A numerical characteristic of a population
 2) A statistic of a population
 3) A measurement of a population
 b. A constant
7. Statistic
 a. Definitions
 1) A numerical characteristic of a sample
 2) A measurement of a sample
 b. A variable
8. Experimental Design or Research Design
 a. Definition: Concerned with all the things that influence the numbers
 b. The way the researchers did their experiment may have influenced the outcome.
 c. Remember the definition of statistics – the manipulation of numbers and the conclusion based on these numbers. 71
  • 9. Variable a. Definition: Something that exists in more than one amount or form b. Examples 1) Height 2) Gender 3) Weight 4) Test scores i. I. Q. ii. IOWA iii. LEAP iv. ACT10. Types of Variables a. Independent variable: The treatment (selected by the researcher) (IV) b. Dependent variable: The observed results (in education, test scores) (DV) c. Extraneous variable: A variable other than the treatment (IV) that might affect the results (DV) d. Remember: IV (treatment) may or may not affect DV (results). e. Examples of treatment 1) Different book 2) Different teaching method 3) Male/female teachers 4) Experience of teachers 5) Time of day 72
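To tie the ideas in this chapter together, here is a minimal Python sketch showing the difference between a population parameter and a sample statistic. The scores and sample size are invented for illustration; they are not data from the text.

import random
import statistics

random.seed(1)

# Artificial population of 5,000 test scores (illustrative numbers only)
population = [random.gauss(60, 10) for _ in range(5000)]

# Parameter: a numerical characteristic of the population (a constant)
mu = statistics.mean(population)

# Draw a random sample of 100 scores from the population
sample = random.sample(population, 100)

# Statistic: a numerical characteristic of the sample (a variable,
# since it changes from sample to sample)
x_bar = statistics.mean(sample)

print(f"Population mean (parameter, mu) = {mu:.2f}")
print(f"Sample mean (statistic, X-bar)  = {x_bar:.2f}")

Re-running the sampling step produces a slightly different sample mean each time, which is exactly the chance variation that inferential statistics is designed to take into account.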
  • Chapter 12 Types of Statistical DataKey Points1. Nonparametric Data: Data not normally distributed (Non-normal) – Discrete data - a. Nominal Data (Refers to things) 1) Just names something or someone 2) Examples i. Social security numbers ii. Phone numbers iii. I. D. number iv. Credit card number v. Home address vi. Bank account number 3. Nominal data are not very useful in research. Averages can’t be computed with this type of data. b. Ordinal Data (Refers to frequency) 1) Names and ranks (ranked data) 2) Numbers tell you relative positions or orders 3) Examples i. Class rank (1st, 2nd, 3rd, etc.) ii. Rank by height iii. Sports rank iv. Rank in a contest 4) More useful than nominal but still not that useful 5) Not exact 73
• 6) Hides things
 7) Intervals are not equal.
 8) No math is involved.
 9) Ranking is not mathematical.
 10) Can't get an average rank.

Example
Mrs. Smith thinks there is a correlation between how students rank in math and science.

 Mrs. Smith's classes
 Students    Rank in Math Class    Rank in Science Class
 Mary              5                        4
 Joey              3                        5
 Alice             4                        2
 Sam               1                        3
 Bob               2                        1

What does this "1" ranking really mean? We do not know how the class as a whole performed. It could mean this student scored 60/100. That is why it is maintained that ordinal data (ranking) hides information.
Instead of ranking, Mrs. Smith should use the actual test scores of students because they are more specific data.
It is best not to use stanines either when comparing students.

 Stanine:             1     2     3     4     5     6     7     8     9
                    Bottom                                            Top
 Percent of scores:  4%    7%    12%   17%   20%   17%   12%   7%    4% 74
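As an illustration of the point just made about ranks hiding information, the sketch below compares Mrs. Smith's ranks with a set of made-up raw scores. The raw scores are invented for the example (the text gives only the ranks), and the use of Python's scipy library, including the Spearman rank correlation, is an assumption on my part rather than something the book prescribes.

from scipy import stats

# Ranks from the Mrs. Smith example (1 = best)
math_rank = [5, 3, 4, 1, 2]       # Mary, Joey, Alice, Sam, Bob
science_rank = [4, 5, 2, 3, 1]

# Hypothetical raw test scores for the same students (invented numbers,
# chosen to be consistent with the ranks above)
math_score = [60, 82, 75, 95, 90]
science_score = [70, 65, 88, 80, 92]

# Ordinal data: correlation of the ranks (Spearman)
rho, _ = stats.spearmanr(math_rank, science_rank)

# Interval/ratio data: correlation of the raw scores (Pearson)
r, _ = stats.pearsonr(math_score, science_score)

print(f"Correlation of ranks (Spearman rho)   = {rho:.2f}")
print(f"Correlation of raw scores (Pearson r) = {r:.2f}")

Two students with very different raw scores can sit only one rank apart, which is why the chapter recommends working with raw scores rather than ranks whenever the scores are available.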
• 2. Parametric Data: Data that are normal (continuous)
 a. Interval Data
 1) Names, ranks, and has equal intervals between numbers
 2) Example: Temperature (i.e., Fahrenheit)
 3) Has equal units of measurement but no true zero point
 4) Can't form meaningful ratios (e.g., 80°F is not twice as hot as 40°F)
 5) Many educational and psychological studies have been done using interval data.
 b. Ratio Data
 1) Names, ranks, has equal intervals, and has a true zero point
 2) Examples
 i. Height
 ii. Time
 iii. Distance
 iv. Some test scores (i.e., a teacher's test)
 v. Speed
 vi. Weight
 vii. Income
 3) Can compute mathematical operations
 4) Can get an average
 5) Can say something/someone is twice, three times, etc. as tall, fast, heavy, etc. 75
• Scales of Different Types of Data

 Nonparametric Data (non-normal) (discrete data – just there)
 1. Nominal
 2. Ordinal

 Parametric Data (assumes normality) (continuous)
 3. Interval
 4. Ratio
 Mathematical operations can be computed with these types of data.

 [Figure: a normal curve. Half of the scores fall on each side of the average; as you move farther from the average, the percentage of scores gets smaller.] 76
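As a rough summary of how the four scales differ in practice, the sketch below computes only the summary that is generally meaningful at each level of measurement. The example data are invented, and the pairing of scale to statistic follows the chapter's discussion rather than a rule stated in the book.

import statistics

# Nominal data: categories only, so report the mode (most frequent category)
bus_route = ["A", "B", "A", "C", "A", "B"]
print("Nominal  -> mode:", statistics.mode(bus_route))

# Ordinal data: ranks, so report the median rank
class_rank = [1, 2, 3, 4, 5, 6]
print("Ordinal  -> median:", statistics.median(class_rank))

# Interval data: equal units but no true zero; means and differences are fine
temps_f = [68, 70, 72, 75, 71]
print("Interval -> mean:", statistics.mean(temps_f))

# Ratio data: true zero point, so means AND ratios are meaningful
heights_in = [60, 66, 72]
print("Ratio    -> mean:", statistics.mean(heights_in),
      "| tallest is", round(max(heights_in) / min(heights_in), 2),
      "times the shortest")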
• Chapter 13 Descriptive Statistics (Summarize or describe test scores)
1. Two Types of Descriptive Statistics
 a. Three Measures of Central Tendency
 1) Mean: arithmetic average
 2) Median: midpoint in a distribution of scores arranged in ascending or descending order
 3) Mode: the number in a data set that occurs most often
 4) Examples: i. Distribution X and ii. Distribution Y (worked out below) 78
• [The worked examples for distributions X and Y appear here as figures. They identify the middle score (the median), note when no score occurs more than any other (no mode) and when a distribution is bimodal or trimodal, and show the summation of the scores divided by the number of scores to obtain the mean.]

 Note: When there is an even number of scores, to obtain the median take the average of the two middle numbers. For example, if the two middle numbers are 5 and 6, the median is (5 + 6) / 2 = 5.5. 80
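For readers who want to check these three measures on their own data, here is a minimal Python sketch using the standard library's statistics module; the score list is invented for illustration.

import statistics

scores = [2, 4, 5, 5, 6, 7, 7, 7, 8, 9]   # illustrative test scores

mean = statistics.mean(scores)        # arithmetic average
median = statistics.median(scores)    # midpoint of the ordered scores
modes = statistics.multimode(scores)  # most frequent score(s); more than one means bimodal, etc.

print(f"Mean = {mean}, Median = {median}, Mode(s) = {modes}")

With an even number of scores, median() averages the two middle values, exactly as described in the note above.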
  • 2. The mean is a mathematical entity because the operations involved in computing it are addition, multiplication, and division.3. Types of Distribution a. Normal Distribution b. Positively Skewed Distribution c. Negatively Skewed Distribution4. Characteristics of a Normal Distribution a. The bell curve is symmetrical. b. The highest point is the mean. c. The mean, median, and mode are located at the same place on the Bell curve. d. The mean, median, and mode are located at the 50th percentile. e. The scores cluster around the mean. As you move farther to the left or right, there are fewer and fewer scores. f. Half of the scores are above the mean, and half of the scores are below it. g. Most people score around the mean. h. The curve never touches the baseline and goes forever in both directions because it is a theoretical model. i. Example 57 58 58 59 59 59 60 60 60 60 61 61 61 62 62 63 81
• [Figure: a normal distribution of the scores above.] The same amount of numbers are on either side of 60; therefore, the mean, median, and mode are located at the same place. In a normal curve distribution, the slope represents the frequency of scores. When thousands of people are involved, scores tend to fall into a normal curve.
5. Characteristics of a Positively Skewed Distribution
 a. The mean, median, and mode are not located at the same point.
 b. Outliers cause distortion and cause the mean to be pulled to the right.
 c. When the mean is pulled to the right, you have a positively skewed distribution.
 d. The mean is higher than the median.
 e. Example: 57 58 58 59 59 60 60 60 61 61 62 69
 The median is 60; however, 69 is the outlier and causes the mean to be greater than the median. 82
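A quick way to see the pull of an outlier described above is to compute the mean and median of the example scores with and without the extreme value. The sketch below simply re-uses the numbers from the example.

import statistics

scores = [57, 58, 58, 59, 59, 60, 60, 60, 61, 61, 62, 69]

print("With the outlier 69:")
print("  mean   =", round(statistics.mean(scores), 2))   # pulled toward the outlier
print("  median =", statistics.median(scores))           # stays near the center

without_outlier = [s for s in scores if s != 69]
print("Without the outlier:")
print("  mean   =", round(statistics.mean(without_outlier), 2))
print("  median =", statistics.median(without_outlier))

The mean moves with the outlier while the median barely changes, which is why the chapter recommends the median for reporting a class average in skewed distributions.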
• 6. Characteristics of a Negatively Skewed Distribution
 a. The mean is to the left of the median.
 b. The mean is lower than the median.
 c. Example: 50 59 59 60 60 60 61 61 62
 The median is 60; however, 50 is the outlier and causes the mean to be lower than the median.
7. Facts to Remember
 a. In a skewed distribution, the best indicator is the median because it does not move.
 b. When the mean is more than the median, it is a positively skewed distribution.
 c. When the mean is less than the median, it is a negatively skewed distribution.
8. The median is always the center.
9. The mean can be pulled to the right or left.
10. In skewed distributions, use the median to report a class average.

 Measures of Variability (also called Spread, Scatter, Dispersion, and Deviation)
 1) Range: the highest score minus the lowest score
 a. If the range is small, the standard deviation will also be small.
 b. If the range is large, the standard deviation will also be large.
 2) Sum of Squares: the sum of the squared units of deviation from the mean; the central mathematical quantity on which everything in parametric statistics is based 83
• 3) Variance: the average squared units of deviation from the mean
 4) Standard Deviation: the average units of deviation from the mean
 5) Symbols for…
                          Population    Sample
 Sum of Squares              SS            SS
 Variance                    σ²            S²
 Standard Deviation          σ             S

 for Mean
                          Population    Sample
                             µ             X̄, Ȳ, Z̄, etc. (anything with a bar over it)

Note: Always ask if you are computing the standard deviation for a population or a sample. The formula is slightly different.

 Conceptual Way (slow way)
 [Worked example shown as a table with columns: Raw Scores | Mean | Deviation from Mean | Squared Deviation Scores | Sum of Squares | Variance (for population) | Standard Deviation (for population)] 84
• Computational Way (fast way)
 [Worked example shown as a table with columns: Raw Scores | Variance (for population and sample) | Standard Deviation (for population)]

 Sum of Squares:                           SS = ∑X² − (∑X)²/N

 Variance (for a population):              σ² = [∑X² − (∑X)²/N] / N

 Standard Deviation (for a population):    σ = √( [∑X² − (∑X)²/N] / N )
 Standard Deviation (for a sample):        S = √( [∑X² − (∑X)²/N] / (N − 1) )

Central Measures and Variability
Directions: Find all central measures (mean, median, and mode) of all distributions. Find all measures of variability (sum of squares, variance, and standard deviation) of all distributions.

   1)   2)   3)   4)
   11   12   13   14 85
•  11   12   13   14
   12   13   14   15
   13   20   16   15
   13   20   17   18
   13   23   18   18
   14

   5)   6)   7)   8)
    3    3    3    3
    3    6    9    8
    3    9   12   11
    4   12   12   12
    4   15   12   12
    5   15   12   13

   9)  10)  11)
    2    4    1
    4    4    1
    6    4   10 86
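As a model for the exercises above, the sketch below works data set 9), the scores 2, 4, and 6, through the computational formulas, treating the scores first as a population and then as a sample. Python is used only as a calculator here; any statistics package would give the same numbers.

import math

scores = [2, 4, 6]          # data set 9) from the exercises
N = len(scores)

sum_x = sum(scores)                    # ∑X  = 12
sum_x_sq = sum(x * x for x in scores)  # ∑X² = 56

# Sum of Squares: SS = ∑X² − (∑X)²/N
ss = sum_x_sq - (sum_x ** 2) / N       # 56 − 144/3 = 8

pop_variance = ss / N                  # σ² = 8/3 ≈ 2.67
pop_sd = math.sqrt(pop_variance)       # σ  ≈ 1.63
sample_sd = math.sqrt(ss / (N - 1))    # S  = √(8/2) = 2.0

print(f"SS = {ss}, population variance = {pop_variance:.2f}, "
      f"population SD = {pop_sd:.2f}, sample SD = {sample_sd:.2f}")

The mean and median of this set are both 4; checking the answers with statistics.pstdev() and statistics.stdev() reproduces the population and sample values, respectively.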
• Chapter 14 Types of Distributions
Key Points
1. Mesokurtic Distribution
 a. This is a normal distribution.
 b. The curve is symmetrical.
 c. Example: [Figure: the normal curve marked off in standard deviation units.]
    Area between SD units (each side of the mean):  0 to 1: 34.13%;  1 to 2: 13.59%;  2 to 3: 2.15%;  3 to 4: .12%
    Standard Deviation:    -4    -3     -2     -1     0     1      2      3     4
    Cumulative percent:          .1%    2.3%   15.9%  50%   84.1%  97.7%  99.9% 87
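The percentages in the figure above come from the standard normal curve. The sketch below recomputes them with scipy's normal distribution functions (the use of scipy here is an assumption; the book itself relies on printed tables).

from scipy.stats import norm

# Area between consecutive standard deviation units (one side of the curve)
for lo, hi in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    area = norm.cdf(hi) - norm.cdf(lo)
    print(f"Area between {lo} and {hi} SD: {area * 100:.2f}%")

# Cumulative percent of scores falling below each SD unit
for z in range(-3, 4):
    print(f"Percent below {z:+d} SD: {norm.cdf(z) * 100:.1f}%")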
• 2. Platykurtic Distribution
 a. This distribution is basically flat.
 b. It has the most variability.
 c. Example: [Figure: a flat distribution in which each score from 1 through 18 occurs about equally often.]
    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
3. Leptokurtic Distribution
 a. Practically everybody scores in the middle.
 b. This type of distribution has the least variability.
 c. There is no trend. (The trend is there is no trend.)
    Example: [Figure: scores piled up in the middle of the scale, five scores of 5 on an axis running from 2 to 7.]

 KURTOSIS IS THE TERM USED TO DESCRIBE THE VARIABILITY (SPREAD) OF A DISTRIBUTION. 88
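For a numerical check on these shapes, scipy provides a kurtosis function (again an assumption on my part; it is not part of the book's toolkit). In the convention used below, a normal (mesokurtic) distribution has excess kurtosis near 0, a flat (platykurtic) one comes out negative, and a sharply peaked (leptokurtic) one comes out positive. The simulated data are invented for illustration.

import random
from scipy.stats import kurtosis

random.seed(2)

mesokurtic = [random.gauss(0, 1) for _ in range(10000)]      # bell-shaped
platykurtic = [random.uniform(-3, 3) for _ in range(10000)]  # basically flat
leptokurtic = [random.gauss(0, 0.2) if random.random() < 0.9 else random.gauss(0, 3)
               for _ in range(10000)]                        # heavily peaked, rare extremes

for name, data in [("mesokurtic", mesokurtic),
                   ("platykurtic", platykurtic),
                   ("leptokurtic", leptokurtic)]:
    print(f"{name:12s} excess kurtosis = {kurtosis(data):.2f}")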
• Chapter 15 FORMULAS

Name of Test: z Test
 Characteristics:
  - one sample
  - based on the normal distribution
  - σ (population standard deviation) is known
  - critical z is always 1.65 at the .05 alpha level
 Formula:  z = (X̄ − M) / (σ / √N)

Name of Test: t Test (one sample)
 Characteristics:
  - one or two tailed
  - σ is unknown
  - one sample
 Formula:  t = (X̄ − M) / (S / √N),   DF = N − 1

Name of Test: Independent t Test
 Characteristics:
  - two different independent groups
  - no σ
  - no population mean
 Formula:  t = (X̄ − Ȳ) / √{ [ (∑X² − (∑X)²/N) + (∑Y² − (∑Y)²/N) ] / [ n(n − 1) ] },   DF = n₁ + n₂ − 2 89
• Name of Test: Correlated t Test
 Characteristics:
  - pre and post tests (pairs)
  - same group
  - D = X − Y
 Formula:  t = (X̄ − Ȳ) / √{ [∑D² − (∑D)²/N] / [N(N − 1)] },   DF = N − 1

Name of Test: Pearson's r (Correlation)
 Characteristics:
  - measures the degree of relation between two variables
  - determines the degree of linear relation
 Formula:  r = [∑XY − (∑X)(∑Y)/N] / √{ [∑X² − (∑X)²/N] · [∑Y² − (∑Y)²/N] } 90
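The hand formulas above can be checked against library routines, which implement the same underlying computations. The sketch below runs each test on small invented data sets with scipy (an assumption on my part; the chapter presents only the formulas). All of the names and numbers are made up for illustration.

import math
from scipy import stats

# One-sample z test (computed directly; sigma is assumed known)
scores = [52, 55, 49, 61, 58, 54, 50, 57]
M, sigma = 50, 5                     # hypothesized population mean and known sigma
z = (sum(scores) / len(scores) - M) / (sigma / math.sqrt(len(scores)))
print(f"z = {z:.2f}")

# One-sample t test (sigma unknown)
t1, p1 = stats.ttest_1samp(scores, popmean=M)
print(f"one-sample t = {t1:.2f}, p = {p1:.3f}")

# Independent t test (two different groups)
group_a = [48, 52, 47, 50, 49]
group_b = [53, 55, 51, 56, 52]
t2, p2 = stats.ttest_ind(group_a, group_b)
print(f"independent t = {t2:.2f}, p = {p2:.3f}")

# Correlated (paired) t test (pre and post scores for the same group)
pre = [60, 62, 58, 65, 61]
post = [63, 66, 59, 70, 64]
t3, p3 = stats.ttest_rel(pre, post)
print(f"correlated t = {t3:.2f}, p = {p3:.3f}")

# Pearson's r (degree of linear relation between two variables)
r, p4 = stats.pearsonr(pre, post)
print(f"Pearson r = {r:.2f}, p = {p4:.3f}")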
• Chapter 16 The Basics: Understanding and Using Statistics
1. The most common skill necessary for doing statistics is counting. For example:
 a. the number of days a student is present or absent
 b. the number of items correct or incorrect on a test
 c. the number of discipline referrals
 d. frequency of unacceptable or desirable behaviors
 e. the number of attempts required to master a skill
2. The second most common skill used in statistics is measurement. For example, things we measure in education include:
 a. achievement of individuals or achievement gaps between groups
 b. aptitude
 c. interest
 d. skill level
 e. knowledge
 f. attitudes of teachers, parents, and students toward specific things
 g. opinions of various constituencies
 h. beliefs of important players in the organization
 i. level and type of motivation
 j. degree of improvement
 k. progress
 l. behaviors
3. The most frequently applied mathematical operations in statistics include addition, subtraction, multiplication, and division. If you know how to count, measure, add, subtract, multiply, and divide, 91
• then you ALREADY possess the skills necessary to do statistics.
4. Many statistical concepts have become a part of our daily vocabulary. We use these concepts without thinking. For example:
 a. I am going to calculate the "average." (Statisticians call this the arithmetic mean or mean.)
 b. She is above average. (Statisticians say more precisely that her performance on a measurement was one, two, or three standard deviations above the mean.)
 c. I am 99.9% sure. (Statisticians call this p < .001 or a confidence level; that is to say, these results were not due to accident or chance.)
 d. That information seems a bit "skewed." (Statisticians say that the mean and median are not equal and that the distribution is positively or negatively skewed.)
 e. There is a correlation between this and that. (Statisticians say that there is a statistically significant relationship between this and that. The correlation is usually stated in numeric form, for example r = .34, p < .01.)
5. Established research designs and procedures for calculating and thinking about statistics already exist. All you have to do is learn the directions and follow them. Making your work easier are the facts that:
 a. Research design tells you what data to gather.
 b. Statistical procedures and formulas already exist and can be used for calculating your data.
 c. Statistical software such as the Statistical Package for the Social Sciences (S.P.S.S.) and S.A.S. makes the analysis of your data very systematic and complete, including tables, graphs, and charts.
 1) SPSS is a quality software application for students in the initial stage of learning statistical analyses. In addition, SPSS is a low 92
• cost resource for students, and it provides professional statistical analysis and tools in a user-friendly software environment for both MAC and PC users. A list of resources for learning SPSS is provided at the end of the chapter.
 2) SAS is a more complex package with high levels of statistical analysis capability. SAS handles a wide variety of specialized functions for data analysis and procedures. This software package is utilized extensively in business and industry as well as in educational settings, offering tools for both specialized and enterprise-wide analytical needs. SAS is provided for PC, UNIX, and mainframe computer platforms. A list of resources for learning SAS is provided at the end of the chapter.
6. In a very short time you will realize that you can use your existing skills but will use them MORE skillfully when you do statistics.
 a. By counting, measuring, comparing, and examining relationships of the RIGHT things you will be able to skillfully analyze data and draw accurate and MEANINGFUL conclusions.
 b. You will learn to use your findings and conclusions to make better informed educational decisions. 93
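For readers without access to SPSS or SAS, the same kind of systematic descriptive output can be sketched with open-source tools. The example below uses Python's pandas library (an assumption for illustration, since the chapter itself discusses only SPSS and SAS) on a small invented gradebook.

import pandas as pd

# A small invented data set: test scores under two teaching methods
data = pd.DataFrame({
    "method": ["A", "A", "A", "B", "B", "B"],
    "score":  [48, 52, 50, 55, 53, 56],
})

# Descriptive statistics (count, mean, std, min, quartiles, max) for each group,
# roughly the kind of summary table a statistical package would produce
print(data.groupby("method")["score"].describe())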
  • Web Resources for SPSS• http://www.utexas.edu/its/rc/tutorials/stat/spss/spss1/index.html• http://www.ats.ucla.edu/STAT/mult_pkg/whatstat/default.htm• http://www.stat.tamu.edu/spss.php• http://www.spsstools.net/spss.htm• http://cs.furman.edu/rushing/mellonj/spss1.htm• http://www.ats.ucla.edu/stat/spss/examples/default.htm• http://www.psych.utoronto.ca/courses/c1/spss/toc.htm• http://www.ats.ucla.edu/stat/spss/modules/default.htm• http://data.fas.harvard.edu/projects/SPSS_Tutorial/spsstut.shtml• http://www.cas.lancs.ac.uk/short_courses/intro_spss.html• http://www.cas.lancs.ac.uk/short_courses/notes/intro_spss/session1.pdf• http://www.bris.ac.uk/is/learning/documentation/spss-t2/spss-t2.pdf• http://calcnet.mth.cmich.edu/org/spss/toc.htm• http://www.indiana.edu/~statmath/stat/spss/• http://dl.lib.brown.edu/gateway/ssds/SPSS%202%20Hypothesis%20Testing %20and%20Inferential%20Statistics.pdf• http://dl.lib.brown.edu/gateway/ssds/SPSS1%20Finding%20and %20Managing%20Data%20for%20the%20Social%20Sciences.pdf• http://www.shef.ac.uk/scharr/spss/index2.htm Web Resources for SAS• http://www.itc.virginia.edu/research/sas/training/v8/• http://www.ats.ucla.edu/stat/sas/sk/• http://www.ssc.wisc.edu/sscc/pubs/stat.htm• http://web.fccj.org/~jtrifile/SAS2.html• http://www.utexas.edu/cc/stat/tutorials/sas8/sas8.html• http://www.ats.ucla.edu/stat/sas/modules/• http://www.psych.yorku.ca/lab/sas/• http://instruct.uwo.ca/sociology/300a/SASintro.htm• http://web.utk.edu/~leon/jmp/• http://www.stat.unc.edu/students/owzar/stat101.html 94
• Chapter 17 Getting Started With Research: Avoiding the Pitfalls
Any of the following mistakes can prevent a study from getting off the ground or being carried out to completion. Avoid these mistakes by listening to the voice of experienced professors when they tell you to modify your study. Consider the following mistakes and the proposed solutions.
1. Research is conducted with conflicting purposes or research questions that do not match your stated purpose. Research efforts may halt due to the confusion.
 Solution: Write the purpose and research questions with clarity and simplicity. Allow expert writers to critique your work and take their suggestions seriously.
2. Researcher fails to distinguish between the practical problem and the research problem. She may try to save the whales with her study when a better understanding of the problems that endanger the whales is needed. The study may prove too unwieldy to complete. The goal may be too grandiose to be attainable.
 Solution: Map out the entire research agenda necessary to address a practical problem, then carefully carve out for your own study the part that is most significant and workable. Remember that your goal is to finish.
3. Researcher attempts to make the study overly complex when a simpler design would yield equally useful information. The study may become unwieldy and may obfuscate rather than shed light on the subject.
 Solution: Examine all research questions included in your study and rank them in order of their significance and usefulness. If any data do not help fulfill the purpose of your study, then these should be dropped so that the other areas can stand out. 95
  • 4. Researcher attempts to define the problem and purpose of the study without first engaging in an extensive reading of all relevant literature. This results in a superficial or naïve study that is not very useful. Solution: Read everything you can get your hands on systematically sort the types of studies and conceptual areas. Your study will take on a well- informed vision of what more needs to be known.5. Researcher defines the problem and purpose of the study without first seeking the counsel of experts who are knowledgeable about the subject. Once completed, the study may lack credibility with practitioners. Solution: Spend a great deal of time talking to practitioners about the problems they face when dealing with the issues that you are interested in writing about. Let them provide you with an expert perspective as you seek to define the problem and purpose of your study.6. Researcher uses methodologies that he does not understand well. If thedesign is inappropriate to the purpose of the study or the form of the data iswrong, he may be unable to interpret the data or complete the study. Solution: Consult statistics and research design experts regarding your goals as a researcher. Take courses that you need to become proficient in the specific methodologies that you wish to apply to your study.7. The methodology or the title of the study drives the study rather than the purpose. When a study driven primarily by methodology, the purpose and significance are diminished to make the study easier to complete. This may result in a less significant or useful study. Solution: Do not title your work until you understand the research problem well and the purpose that your study will reflect. Avoid selecting a cool sounding methodology until you are certain that there it will help you answer the specific things that you need to know.8. Catchy phrases or terms are used to define the purpose and problem while little attention is paid to the significance of a study. Study may be well done, or even interesting, but may not be very useful. 96
  • Solution: The significance of a study can mean the difference in whether the study is published or whether it actually is read. Understand who the intended audience of a study may be and try to address their interests and needs and particularly what they need to know.9. Study is not sufficiently delineated and limited so that the time or effort required to complete the study becomes overwhelming. Solution: Listen to your professors when they tell you the study may take a lot longer if it is not narrowed down. Provide a “recommendations for further research” section in your work so that extraneous matters may be addressed in the future by you or other researchers. 97
• Chapter 18 Ethics and Research
1. Responsible conduct guiding researchers. Universities, federal and state governments, as well as professional organizations, have guidelines on ethical behavior and research.
2. Informed consent
 - Participants must be informed and voluntarily give their consent to participate in a study.
 - Participants must be fully informed about all procedures and possible risks.
 - Participants must be informed of the purpose of the research and how the data will be used.
 - Benefits of the study.
 - Alternative treatments and potential compensation.
 - They must understand and arrive at a decision without coercion. (Voluntary participation)
 - Starts before the research begins.
 - Privacy and confidentiality of research subjects and data.
 - Contacts
 - Approval of the IRB (Institutional Review Board)
3. Termination of research if harm is likely. Risk-benefit assessments.
4. Special protection for vulnerable populations of research subjects.
5. Equitable recruitment of participants.
6. Results should be for the good of society and unattainable by any other means.
7. Beneficence - To promote understanding and shed light on the human condition. Protection of those participating in the study.
8. Honesty - No data are to be suppressed; data should be reported as collected. 98
  • 9. Misconduct - Fabrication - Falsification - PlagiarismSUGGESTED STUDENT ACTIVITIES:1. In small groups discuss the relationship between academic freedom andresearch ethics. Share your discussion with the entire class.2. What steps should researchers take to ensure all areas of informed consentare addressed in their research study? Share your discussion with the class.3. What steps would you take to make sure you are not involved in unethicalconduct in research? Share your discussion with the class.WEBSITESAPAs Research “Ethics and Regulation”http://www.apa.org/science/research.htmlNational Institutes of Health (NIH) “Bioethics Resources”http://www.nih.gov/sigs/bioethics/index.htmlResearch Ethicshttp://faculty.ncwc.edu/toconnor/308/308lect10.htmThe National Institutes of Health (NIH) "Human Participants ProtectionsEducation for Research Teams”http://ethics.od.nih.gov/The Department of Health and Human Services (DHHS) Office of ResearchIntegrity http://www.ori.hhs.gov/ 99
• Chapter 19 Ethics in Research on Human Subjects and the Role of the Institutional Review Board
Frequently Asked Questions
1. What is an IRB?
 The IRB is a committee that is assigned the task of reviewing proposed research by a university or other institution that receives federal funds and is in the business of conducting research on human subjects. The IRB is required by Part 46 of Title 45 of the Code of Federal Regulations, also called 45 CFR 46. According to the Department of Health and Human Services, it is the responsibility of the IRB to recommend to university officials that proposed research either be approved or disapproved based on a set of rules called the Common Rule.
2. Why do we have IRBs?
 Every institution that conducts research on human subjects and also receives federal funds must provide a formal mechanism for ensuring that research is conducted in a manner that reflects nationally recognized standards. Failure to comply with policy can place the researcher and his or her institution at risk for litigation. In a few instances the federal government has temporarily suspended all research activities at key research universities for failure to comply with the law.
3. What is the Common Rule?
 The Common Rule was established in 1991 in federal law 45 CFR 46.112. It details all of the areas of compliance with accepted norms for conducting research on human subjects established by the Helsinki Agreement and a series of declarations referred to as the Belmont Report. These principles are detailed in the Common Rule. These include: 100
  • a. informed consent b. protection of confidentiality or anonymity of all human subjects c. acknowledging the right of the subject not to participate in a study d. ensuring that subject is aware of his or her right to discontinue the study at any time without adverse consequence e. ensuring that the study provides a benefit to the community f. ensuring that the study has a direct benefit for the subject participating in the study g. ensuring that the subject is aware of the risks involved in the study h. ensuring that the researcher has found less invasive or intrusive ways to obtain the same information i. that the individual subject has given permission to be deceived during an experimental study j. that parents have granted permission for children under the age of 18 to participate k. that any psychological or physical harms will be remedied with expenses paid by the researchers. l. the researcher is protected from possible harms or is taking informed risks m. specific measures for achieving each of the above has been spelled out n. that theses measures are meticulously followed.4. Are all studies subject to IRB approval? No. However all studies that will involve gathering data from the public or that will be published in some form must be reviewed before university officials will approve the protocol. To accommodate social science research and historical research expedited review protocols are submitted. Studies that must be reviewed meet the following criteria: a. the results will be published b. the study involves experimentation on human subjects c. the study is invasive or intrusive in some way d. the study involves deception e. there are possible risks to the subject f. there may be no community benefit or direct benefit for the subject g. there is a possible conflict of interest by researchers in the study h. medical or mental health research 101
  • 5. When my study has been approved by the IRB, are there any additional requirements that researchers must follow? Yes. The Common Rule states that research approved by an IRB may be subject to further review for approval or disapproval by officials of the institution under the following circumstances: a. if a third party complains of possible wrong-doing or harms realized b. a senior administrator at the university may raise questions that would result in a follow-up IRB review. 102
• Chapter 20 Working with the IRB
Suggested Frame of Mind for Researchers
The following suggestions are based on the assumption that the researcher and the Institutional Review Board (IRB) regulator find themselves on common ground – partners in learning to cooperate in improving research and its ethical oversight.
 1. Become an expert in the ethical issues surrounding your specific research purpose, related questions, and methodology. Not all studies require the same degree of IRB monitoring.
 2. Become an expert in the ethical standards for research in your academic discipline. Carefully worded research proposals may allow IRB regulators to approve them without incident.
 3. Become an expert in the IRB process of your institution. Examine how each part of the IRB protocol or checklist relates to the ethical issues of your particular study, methodology, and academic discipline.
 4. Get to know your IRB members personally. Don't wait until you submit your proposal or go to the IRB meeting to discover who they are.
 5. Assume that IRB members want to do a good job. Empathize with them as you would someone who is in training for a new job.
 6. Continue to conduct occasional conversations with IRB members after your proposal has been approved. Over time IRB members will come to view your research proposals with greater confidence.
 7. Before IRB meetings, listen carefully as IRB members talk to you about research and ethics. Be prepared in non-public, non-confrontational ways to share your concerns regarding their statements or written comments. 103
• These guidelines can help you get off to a good start without cynicism or frustration. A positive working relationship with the IRB can promote good professional health within your research community.

IRB RESOURCES ON THE WEB:
http://en.wikipedia.org/wiki/Institutional_Review_Board This site defines the purpose and premises for ethics in research along with the basis for reviewing and monitoring behavioral research involving human subjects.
http://www.irbforum.org/ This site provides support and a forum for discussing ethical, regulatory, and policy issues related to human subjects research.
http://www.northshorelij.com/body.cfm?id=5545&plinkID=5096 This site provides a good IRB map to assist in decisions regarding the submission of an IRB application. 104
  • Chapter 21 RESEARCH, WRITING & PUBLICATION1. Brainstorm ideas for research and possible publication. - Look at current journals to see what is current or a “hot” topic. Many also have a “Call for Papers” listing the topics they plan to publish in future editions. - Ask professional educational organizations what topics are popular or important issues in their field of education. - Think about what interests you. You have to live with the topic until you complete it. If you are not interested in the topic, it will become boring or be difficult to keep on task and complete. - Find out if a colleague or another person in the field of education has a project, interest, etc. that you could work on with them. - Find out if a textbook company is looking for someone to write a chapter in a textbook. These might be on their website or they might send an email to those on their listserve.2. Determine the type of manuscript you want to write. (NOTE: You are working on a manuscript. Many people call or interchange the term article for manuscript. A MANUSCRIPT is work that is submitted for possible publication. An ARTICLE is a manuscript that has been published.) - Objective survey of the literature available on a topic - Analysis of literature to support the author’s viewpoint - Interpretive paper on a specific theory, concept, etc. - Theory paper that develops a new conceptual framework - Research paper - describing the study, participants, results, conclusions, etc. - Chapter for a textbook (They are the easiest to be accepted since they do not have to go through a blind peer-review process) - Other types of papers as indicated in the professional journals you read 105
  • 3. Its also important to know what types of manuscripts a journal typically publishes. - The library should have current issues for your review. Many can be found online. - Review the types of article in several issues of the journal. Do they accept a variety of topics for publication or do they have a theme for the issue? - Read the submission or author guidelines. Many can be found online. - Look at the expertise of the members of the editorial board for ideas on their research interests. 4. The acceptance rates of journals can range from 80% to 5%. Look at publishing in journals where the turnaround time may be shorter. Journals which have very high submission rates have high rejection rates. Look at using your time wisely. Don’t “tie up” an article for 18 months if the journal has a low acceptance rate. 5. Ask colleagues which journals they have submitted manuscripts to. They can give good advice on the “where to” and “where not to” for submissions. 6. Determine which journal you will submit your manuscript. It is important to know where you are going to know how to begin the writing process. It is like taking a trip. You can have a well organized vacation by using a map or a “fly by the seat of your pants” experience without the map. You save time, energy and have a greater chance for successful publication by knowing where you are going. (Remember research ethics. Only submit your manuscript to one journal at a time. You can submit to another journal if you receive notice that your manuscript will not be published by the editor.) 7. When possible, collaborate in writing! A group of two or more can share ideas and the work. - Decide on the topic - Decide the role and responsibility of each team member. (Use each other’s talents. Some are better at writing, others at finding the references, others at editing, etc.) - Set timelines - Meet on a regular basis to keep each other on task, and make changes as needed. 106
  • 8. Schedule a time to write every day. Make it automatic! Thirty to ninety minutes a day, or at least three times a week. This will help you to stay on target and not get overwhelmed at the last minute when your writing project is due.9. Develop an outline for your manuscript. You can read the published articles in the journal where you plan to submit and determine what type of outline to develop.10.Write your introduction and summary first. Most problems are found in these sections. They become a guide to your manuscript (a roadmap)! It will keep you focused on the route you are taking.11.As you write make sure the manuscript indicate you know what is current on that topic. Make sure to have at least one to two references from the same year you plan to submit your manuscript.12.Make sure your manuscript has a solid conceptual basis.13.Make sure that findings in your conclusion have been substantiated in your paper.14.When the paper is well organized and near completion have a couple of colleagues review and edit it. - Does it make sense to someone else who has read it? - Does it follow the publication style? (APA, Chicago, MLA, etc.)15.Tips for submitting your manuscript after it is completed: - Make sure you have the exact copies required. - Write a cover letter with the current editor’s name. - The cover letter should be neat and a brief description of your manuscript, why you are submitting it and your contact information. - If an online submission, are all guidelines for submission followed? - If mailing the manuscript, make sure you have the post office weigh the envelope so you can buy the correct amount for postage. 107
  • 16.Most editors will document they have received your manuscript through a letter or email. If you do not receive a letter within a couple of weeks documenting that your manuscript was received then call or email the editor to check to see if the manuscript was received. Remember FedEX trucks and mail trucks have crashed and hurricanes have damaged mail. Sometimes forces of nature and accidents do cause a manuscript to fall by the wayside.17.If you get an acceptance letter, GREAT JOB!! If you receive a letter indicating the manuscript was not accepted for publication. review the editorial comments. - Revise and resubmit if the editor indicates this should be done. - If you have questions about the comments made by reviews, contact the editor and ask them for clarification. - Ask the editor if they have a suggestion for another journal that might be more appropriate. - Revise and look at other potential journals for possible publication. - Don’t worry, your manuscript might not have been the “right fit” for that journal or the right time to be submitted there. - Sometimes a journal receives several manuscripts on the same topic. The topic might be saturated. Look for another journal to submit the manuscript. - Take heart that everyone will get some “rejection” letters. One of your authors had that experience four times on her first manuscript. Although I kept writing other manuscripts and those were being accepted, the first one was rejected four times. On the fifth submission it was published. NEVER GIVE UP, JUST KEEP SEARCHING FOR THE RIGHT JOURNAL. 108
  • PART II:Fundamental Terms inEducational Researchand Basic Statistics 109
  • Fundamental Terms in Educational Research and Basic StatisticsA priori codes – codes developed before examining the current dataA-B-A design – a single-case experimental design in which the response tothe experimental treatment condition is compared to baseline responsestaken before and after administering the treatment conditionA-B-A-B design – an A-B-A design that is extended to include thereintroduction of the treatment conditionAccessible population – the research participants available for participationin the researchAchievement tests – tests designed to measure the degree of learning thathas taken place after being exposed to a specific learning experienceAcquiescence response set – tendency to either agree or to disagreeAction research – applied research focused on solving practitioner’sproblemsAlternative hypothesis – statement that the population parameter is somevalue other than the value stated by the null hypothesisAmount technique – manipulating the independent variable by giving thevarious comparison groups different amounts of the independent variable.Analysis of covariance – used to examine the relationship between onecategorical independent variable and one quantitative dependent variablecontrolling for one or more extraneous variables; it’s a statistical method thatcan be used to statistically “equate” groups that differ on a pretest or someother variableAnalysis of variance – see one-way analysis of variance 110
  • Anchor – a written descriptor for a point on a rating scaleAnonymity – keeping the identity of the participant from everyone,including the researcherApplied research – research about practical questionsAptitude tests – tests that focus on information acquired through theinformal learning that goes on in lifeArchived research data – data originally used for research purposes andthen storedAxial coding – the second stage in grounded theory data analysisBack stage behavior – what people say and do only with their closestfriendsBar graph – a graph that uses vertical bars to represent the dataBaseline – the behavior of the participant prior to the administration of atreatment conditionBasic research – research about fundamental processesBoolean operators – words used to create logical combinationsBracket – to suspend your preconceptions or learned feelings about aphenomenonCarryover effect – a sequencing effect that occurs when performance in onetreatment conditions is influenced by participation in a prior treatmentcondition(s)Case – a bounded systemCase study research – research that provides a detailed account andanalysis of one or more casesCategorical variable – a variable that varies in type or kind 111
  • Causal modeling – a form of explanatory research where the researcherhypothesizes a causal model and then empirically tests the model. Alsocalled structural equation modeling or theoretical modeling.Causal-comparative research – a form of non-experimental research wherethe primary independent variable of interest is categoricalCause and effect relationship – when one variable affects another variableCell – a combination of two or more independent variables in a factorialdesignCensus – a study of the whole population rather than a sampleChanging-criterion design – a single-case experimental design in which aparticipant’s behavior is gradually altered by changing the criterion forsuccess during successive treatment periodsChecklist – a list of response categories that respondents check ifappropriateChi square test for contingency tables – statistical test used to determine ifa relationship observed in a contingency table is statistically significantCIJE – an annotated index of articles from educational journalsClosed-ended question – a question that forces participants to choose aresponseCluster -- a collective type of unit that includes multiple elementsCluster sampling – type of sampling where clusters are randomly selectedCo-occurring codes – sets of codes that partially or completely overlapCoding – marking segments of data with symbols, descriptive words, orcategory namesCoefficient alpha – a variant of the Kuder-Richardson formula that providesan estimate of the reliability of a homogenous test 112
  • Cohort – any group of people with a common classification or commoncharacteristicCohort study – longitudinal research focusing specifically on one or morecohortsCollective case study – studying multiple cases in one research studyComplete participant – researcher becomes member of group being studiedand does not tell members they are being studiedComplete observer – researcher observes as an outsider and does not tellthe people they are being observedComprehensive sampling – including all cases in the research studyConcurrent validity – validity evidence obtained from assessing therelationship between test scores and criterion scores obtained at the sametimeConfidence interval – a range of numbers inferred from the sample that hasa certain probability of including the population parameterConfidence limits – the endpoints of a confidence intervalConfidentiality – not revealing the identity of the participant to anyoneother than the researcher and the researcher’s staffConfounding variable – an extraneous variable that systematically varieswith the independent variable and also influences the dependent variableConstant – a single value or category of a variableConstant comparative method – data analysis in grounded theory researchConstruct validity – evidence that a theoretical construct can be inferredfrom the scores on a testConstruct – an informed, scientific idea developed or “constructed” todescribe or explain behavior 113
Content validity – a judgment of the degree to which the items, tasks, or questions on a test adequately sample the domain of interest
Contextualization – the identification of when and where an event took place
Contingency table – a table displaying information in cells formed by the intersection of two or more categorical variables
Control group – the group that does not receive the experimental treatment condition
Convenience sampling – people who are available, volunteer, or can be easily recruited are included in the sample
Convergent evidence – evidence that the scores on prior tests and the current test designed to measure the same construct are correlated
Correlation coefficient – an index indicating the strength and direction of the relationship between two variables (a worked sketch appears after this glossary)
Correlational research – a form of non-experimental research where the primary independent or predictor variable of interest is quantitative
Corroboration – comparing documents to each other to determine whether they provide the same information or reach the same conclusion
Counterbalancing – administering the experimental treatment conditions to all comparison groups, but in a different order
Criterion of falsifiability – statements and theories should be “refutable”
Criterion-related validity – a judgment of the extent to which scores from a test can be used to predict or infer performance in some activity
Critical case sampling – selecting what are believed to be particularly important cases
Cronbach’s alpha – see coefficient alpha
Cross-sectional research – data are collected at a single point in time
Culture – a system of shared beliefs, values, practices, perspectives, folk knowledge, language, norms, rituals, and material objects and artifacts that the members of a group use in understanding their world and in relating to others
Data set – a set of data
Data triangulation – the use of multiple data sources
Debriefing – a poststudy interview in which all aspects of the study are revealed, any reasons for deception are explained, and any questions the participant has about the study are answered
Deception – misleading or withholding information from the research participant
Deductive reasoning – drawing a specific conclusion from a set of premises
Deductive method – a top-down or confirmatory approach to science
Dehoaxing – informing participants about any deception used and the reasons for its use
Deontological approach – an ethical approach that says ethical issues must be judged on the basis of some universal code
Dependent variable – a variable that is presumed to be influenced by one or more independent variables
Description – attempting to describe the characteristics of a phenomenon
Descriptive validity – the factual accuracy of an account as reported by the researcher
Descriptive research – research focused on providing an accurate description or picture of the status or characteristics of a situation or phenomenon
Descriptive statistics – division of statistics focused on describing, summarizing, or making sense of a particular set of data
Desensitizing – reducing or eliminating any stress or other undesirable feelings the participant may have as a result of participating in the study
Determinism – the assumption that all events have causes
Diagnostic tests – tests designed to identify where a student is having difficulty with an academic skill
Diagramming – making a sketch, drawing, or outline to show how something works or to clarify the relationship between the parts of a whole
Differential attrition – when participants do not drop out randomly
Differential influence – when the influence of an extraneous variable is different for the various comparison groups
Direct effect – the effect of the variable at the origin of an arrow on the variable at the receiving end of the arrow
Directional alternative hypothesis – an alternative hypothesis that contains either a “greater than” sign or a “less than” sign
Discriminant evidence – evidence that the scores on the newly developed test are not correlated with the scores on tests designed to measure theoretically different constructs
Disproportional stratified sampling – type of stratified sampling where the sample proportions are made to be different from the population proportions on the stratification variable
Double negative – a sentence construction that includes two negatives
Double-barreled question – a question that combines two or more issues or attitude objects
Duplicate publication – publishing the same data and results in more than one journal or in other publications
Ecological validity – the ability to generalize the study results across settings
Effect size indicator – a statistical measure of the strength of a relationship
Element – the basic unit that is selected from the population
Emic term – a special word or term used by the people in a group
Emic perspective – the insider’s perspective
Empirical – based on observation or experience
Empiricism – idea that knowledge comes from experience
Enumeration – the process of quantifying data
Equal probability selection method – any sampling method where each member of the population has an equal chance of being selected
Equivalent-forms reliability – a measure of the consistency of a group of individuals’ scores on two equivalent forms of a test measuring the same construct
ERIC – a database containing information from CIJE and RIE
Essence – the invariant structure of the experience
Ethical skepticism – an ethical approach that says concrete and inviolate moral codes cannot be formulated
Ethnocentrism – judging people from a different culture according to the standards of your own culture
Ethnography – the discovery and comprehensive description of the culture of a group of people; a form of qualitative research focused on describing the culture of a group of people
Ethnohistory – the study of the cultural past of a group of people
Ethnology – the comparative study of cultural groups
Etic term – an outsider’s word or special word used by social scientists
Etic perspective – an external, social scientific view of reality
Evaluation – determining the worth, merit, or quality of an evaluation object
Event sampling – observing only after specific events have occurred
Exhaustive categories – a set of categories that classify all of the relevant cases in the data
Exhaustive – property that response categories or intervals include all possible responses
Expectancy data – data illustrating the number or percentage of people that fall into various categories on a criterion measure
Experiment – an environment in which the researcher objectively observes phenomena that are made to occur in a strictly controlled situation in which one or more variables are varied and the others are kept constant
Experimental group – the group that receives the experimental treatment condition
Experimental control – eliminating any differential influence of extraneous variables
Experimenter effect – the unintentional effect that the researcher can have on the outcome of a study
Explanation – attempting to show how and why a phenomenon operates as it does
Explanatory research – testing hypotheses and theories that explain how and why a phenomenon operates as it does
Exploration – attempting to generate ideas about phenomena
Extended fieldwork – collecting data in the field over an extended period of time
External validity – the extent to which the study results can be generalized to and across populations of persons, settings, and times
External criticism – determining the validity, trustworthiness, or authenticity of the source
Extraneous variable – a variable that may compete with the independent variable in explaining the outcome; any variable other than the independent variable that may influence the dependent variable
Extreme case sampling – identifying the “extremes” or poles of some characteristic and then selecting cases representing these extremes for examination
Facesheet codes – codes that apply to a complete document or case
Factor analysis – a statistical procedure that identifies the minimum number of “factors,” or dimensions, measured by a test
Factorial design – a design in which two or more independent variables are simultaneously studied to determine their independent and interactive effects on the dependent variable
Factorial design based on a mixed model – a factorial design in which different participants are randomly assigned to the different levels of one independent variable, but all participants take all levels of another independent variable
Fieldnotes – notes taken by the observer
Filter question – an item that directs participants to different follow-up questions depending on the response
Focus group – a moderator leads a discussion with a small group of people
Formative evaluation – evaluation focused on improving the evaluation object
Frequency distribution – an arrangement in which the frequency of each unique data value is shown
Front stage behavior – what people want or allow us to see
Fully anchored rating scale – all points are anchored on the rating scale
General linear model – a mathematical procedure that is the “parent” of many statistical techniques
Generalize – making statements about a population based on sample data
Going native – identifying so completely with the group being studied that you can no longer remain objective
Grounded theory – a general methodology for developing theory that is grounded in data systematically gathered and analyzed; a qualitative research approach
Group moderator – the person leading the focus group discussion
Grouped frequency distribution – the data values are clustered or grouped into separate intervals and the frequency of each interval is given
Heterogeneous – a set of numbers with a great deal of variability
Historical research – the process of systematically examining past events or combinations of events to arrive at an account of what happened in the past
History – any event, other than the planned treatment event, that occurs between the pre- and post-measurement of the dependent variable and influences the post-measurement of the dependent variable
Holistic description – the description of how members of groups make up a group
Homogeneity – in test validity, refers to how well a test measures a single construct
Homogeneous sample selection – selecting a small and homogeneous case or set of cases for intensive study
Homogeneous – a set of numbers with little variability
Hypothesis – a prediction or educated guess about the relation that exists among the variables being investigated
Hypothesis testing – the branch of inferential statistics concerned with how well the sample data support a null hypothesis and when the null hypothesis can be rejected (a worked sketch appears after this glossary)
In-person interview – an interview conducted face to face
Independent variable – a variable that is presumed to cause a change in another variable
Indirect effect – an effect occurring through an intervening variable
Inductive reasoning – reasoning from the particular to the general
Inductive codes – codes generated by a researcher by directly examining the data
Inductive method – a bottom-up or generative approach to science
Inferential statistics – division of statistics focused on going beyond the immediate data and inferring the characteristics of populations based on samples; the use of the laws of probability to make inferences and draw statistical conclusions about populations based on sample data
Influence – attempting to apply research to change behavior
Informal conversational interview – spontaneous, loosely structured interview
Instrumental case study – interest is in understanding something more general than the particular case
Instrumentation – any change that occurs in the way the dependent variable is measured
Intelligence – the ability to think abstractly and to learn readily from experience
Inter-scorer reliability – the degree of agreement between two or more scorers, judges, or raters
Interaction with selection – occurs when the different comparison groups are affected differently by one of the threats to internal validity
Interaction effect – when the effect of one independent variable depends on the level of another independent variable
Intercoder reliability – consistency among different coders
Interim analysis – the cyclical process of collecting and analyzing data during a single research study
Internal consistency – the consistency with which a test measures a single construct
Internal validity – the ability to infer that a causal relationship exists
Internal criticism – the reliability or accuracy of the information contained in the sources collected
Internet – a network of millions of computers joined to promote communication
Interpretive validity – accurately portraying the meaning given by the participants to what is being studied
Interrupted time-series design – a design in which a treatment condition is assessed by comparing the pattern of posttest responses obtained from a single group of participants
Interval scale – a scale of measurement that has equal intervals of distances between adjacent numbers
Intervening variable – a variable occurring between two other variables in a causal chain
Interview – a data collection method in which an interviewer asks an interviewee questions
Interview guide approach – specific topics and/or open-ended questions are asked in any order
Interview protocol – data collection instrument used in an interview
Interviewee – the person being asked questions
Interviewer – the person asking the questions
Intracoder reliability – consistency within a single individual
Intrinsic case study – interest is in understanding a specific case
Investigator triangulation – the use of multiple investigators in collecting and interpreting the data
IRB – the institutional review board, the committee that assesses the ethical acceptability of research proposals
Item stem – the set of words forming a question or statement
Kuder-Richardson formula 20 – a statistical formula used to compute an estimate of the reliability of a homogeneous test
Laboratory observation – observation done in a lab or other setting set up by the researcher
Leading question – a question that suggests the researcher is expecting a certain answer
Level of confidence – the probability that a confidence interval to be constructed from a random sample will include the population parameter
Life-world – an individual’s inner world of immediate experience
Likert scale – a summated rating scale
Line graph – a graph that relies on the drawing of one or more lines
Loaded question – a question containing loaded or emotionally charged words
Logic of significance testing – understanding and following the logical reasoning that underlies hypothesis testing
Longitudinal research – data are collected at multiple time points and comparisons are made across time
Low-inference descriptors – description phrased very close to the participants’ accounts and the researchers’ field notes
Lower limit – the smallest number in a confidence interval
Main effect – the effect of one independent variable
Manipulation – an intervention studied by an experimenter
Margin of error – one half of the width of a confidence interval
Master list – a list of all the codes used in a research study
Maturation – any physical or mental change that occurs over time and affects performance on the dependent variable
Maximum variation sampling – purposively selecting a wide range of cases
Mean – the arithmetic average (a worked sketch of the descriptive statistics terms appears after this glossary)
Measure of relative standing – provides information about where a score falls in relation to the other scores in the distribution of data
Measure of central tendency – the single numerical value that is considered the most typical of the values of a quantitative variable
Measure of variability – a numerical index that provides information about how spread out or how much variation is present
Measurement – the act of measuring by assigning symbols or numbers to something according to a specific set of rules
Median – the 50th percentile
Median location – the numerical place where you can find the median in a set of ordered numbers
Mediating variable – an intervening variable
Memoing – recording reflective notes about what you are learning from the data
Mental Measurements Yearbook – one of the primary sources of information about published tests
Meta-analysis – a quantitative technique used to integrate and describe the results of a large number of studies
Method of working hypotheses – attempting to identify all rival explanations
Method of data collection – technique for physically obtaining data to be analyzed in a research study
Methods triangulation – the use of multiple research methods
Mixed purposeful sampling – the mixture of more than one sampling strategy
Mode – the most frequently occurring number
Moderator variable – a variable involved in an interaction effect; see interaction effect
Mortality – a differential loss of participants from the various comparison groups
Multigroup research design – a research design that includes more than one group of participants
Multimethod research – the use of more than one research method
Multiple operationalism – the use of several measures of a construct
Multiple regression – regression based on one dependent variable and two or more independent variables
Multiple time-series design – an interrupted time-series design that includes a control group to rule out a history effect
Multiple-baseline design – a single-case experimental design in which the treatment condition is successively administered to different participants, or to the same participant in several settings, after baseline behaviors have been recorded for different periods of time
Multiple-treatment interference – occurs when participation in one treatment condition influences a person’s performance in another treatment condition
Mutually exclusive – property that categories or intervals do not overlap
Mutually exclusive categories – a set of categories that are separate or distinct
n – the recommended sample size
N – the population size
Naturalistic observation – observation done in “real world” settings
Naturalistic generalization – generalizing based on similarity
Negative criticism – establishing the reliability or authenticity and accuracy of the content of the documents and other sources used by the researcher
Negative-case sampling – selecting, locating, and examining cases that disconfirm the researcher’s expectations and generalizations
Negative correlation – two variables move in opposite directions
Negatively skewed – skewed to the left
Network diagram – a diagram showing the direct links between variables or events over time
Nominal scale – a scale of measurement that uses symbols or numbers to label, classify, or identify people or objects
Nondirectional alternative hypothesis – an alternative hypothesis that includes the “not equal to” sign
Normal distribution – a unimodal, symmetric, bell-shaped distribution that is the theoretical model of many variables
Norms – the written and unwritten rules that specify appropriate group behavior
Null hypothesis – a statement about a population parameter
Numerical rating scale – a rating scale with anchored endpoints
Observation – unobtrusive watching of behavioral patterns
Observer-as-participant – researcher spends a limited amount of time observing group members and tells members they are being studied
Official documents – anything written or photographed by an organization
One-group pretest-posttest design – a research design in which a treatment condition is administered to one group of participants after pretesting, but before posttesting on the dependent variable
One-group posttest-only design – administering a posttest to a single group of participants after they have been given an experimental treatment condition
One-stage cluster sampling – a set of clusters is randomly selected and all of the elements in the selected clusters are included in the sample
One-way analysis of variance – statistical test used to compare two or more group means
Open coding – the first stage in grounded theory data analysis
Open-ended question – a question that allows participants to respond in their own words
Operationalism – representing constructs by a specific set of steps or operations
Opportunistic sampling – selecting cases when the opportunity occurs
Oral histories – accounts based on interviews with a person who has had direct or indirect experience with or knowledge of the chosen topic
Order effect – a sequencing effect that occurs from the order in which the treatment conditions are administered
Ordinal scale – a rank-order scale of measurement
Outlier – a number that is very atypical of the other numbers in a distribution
Panel study – study where the same individuals are studied at successive points over time
Parameter – a numerical characteristic of a population
Partial correlation – used to examine the relationship between two quantitative variables while controlling for one or more quantitative extraneous variables
Partial publication – publishing several articles from the data collected in one large study; generally not unethical for large studies
Participant feedback – discussion of the researcher’s conclusions with the actual participants
Participant-as-observer – researcher spends extended time with the group as an insider and tells members they are being studied
Path coefficient – a quantitative index providing information about a direct effect
Pattern matching – predicting a pattern of results and determining if the actual results fit the predicted pattern
Peer review – discussing one’s interpretations and conclusions with one’s peers or colleagues
Percentile ranks – scores that divide a distribution into 100 equal parts
Percentile rank – the percentage of scores in a reference group that fall below a particular raw score
Periodicity – the presence of a cyclical pattern in the sampling frame
Personal documents – anything written or photographed for private purposes
Personality – a multifaceted construct that does not have a generally agreed-on definition
Phenomenology – the description of one or more individuals’ consciousness and experience of a phenomenon
Pilot test – a preliminary test of your questionnaire
Point estimate – the estimated value of a population parameter
Point estimation – the use of the value of a sample statistic as the estimate of the value of a population parameter
Population – the complete set of cases; the large group to which a researcher wants to generalize the sample results
Population validity – the ability to generalize the study results to individuals not included in the study
Positive correlation – two variables move in the same direction
Positive criticism – ensuring that the statements made or the meaning conveyed in the various sources is correct
Positively skewed – skewed to the right
Post hoc fallacy – making the argument that because A preceded B, A must have caused B
Post hoc test – a follow-up test to the analysis of variance
Posttest-only control-group design – administering a posttest to two randomly assigned groups of participants after one group has been administered the experimental treatment condition
Practical significance – a conclusion made when a relationship is strong enough to be of practical importance
Prediction – attempting to predict or forecast a phenomenon
Predictive research – research focused on predicting the future status of one or more dependent variables based on one or more independent variables
Predictive validity – validity evidence obtained from assessing the relationship between test scores collected at one point in time and criterion scores obtained at a later time
Presence or absence technique – manipulating the independent variable by presenting the treatment condition to one group and withholding it from the other group
Presentism – the assumption that the present-day connotations of terms also existed in the past
Pretest-posttest control-group design – a research design that administers a posttest to two randomly assigned groups of participants after both have been pretested and one of the groups has been administered the experimental treatment condition
Primary source – a source in which the creator was a direct witness or in some other way directly involved or related to the event
Primary data – original data collected as part of a research study
Probabilistic cause – changes in variable A “tend” to produce changes in variable B; a cause that usually produces an outcome
Probability value – the probability of the result of your research study, or an even more extreme result, assuming that the null hypothesis is true
Probability proportional to size – a type of two-stage cluster sampling where each cluster’s chance of being selected in stage one depends on its population size
Probe – prompt used to obtain response clarity or additional information
Problem of induction – things that happened in the past may not happen in the future
Problem – an interrogative sentence that asks about the relation that exists between two or more variables
Proportional stratified sampling – type of stratified sampling where the sample proportions are made to be the same as the population proportions on the stratification variables
Prospective study – another term applied to a panel study
Purposive sampling – the researcher specifies the characteristics of the population of interest and locates individuals with those characteristics
Qualitative observation – observing all potentially relevant phenomena
Qualitative research – research relying primarily on the collection of qualitative data
Quantitative interview – a standardized interview providing quantitative data
Quantitative observation – standardized observation
Quantitative variable – a variable that varies in degree or amount
Quantitative research – research relying primarily on the collection of quantitative data
Quasi-experimental research design – an experimental research design that does not provide for full control of potential confounding variables, primarily because participants are not randomly assigned to comparison groups
Questionnaire – a self-report data collection instrument filled out by research participants
Quota sampling – the researcher determines the appropriate sample sizes or quotas for the groups identified as important and takes convenience samples from these groups
Random assignment – randomly assigning a set of people to different groups; a statistical control procedure that maximizes the probability that the comparison groups will be equated on all extraneous variables
Range – the difference between the highest and lowest numbers
Ranking – the ordering of responses into ranks
Rating scale – a continuum of response choices
Ratio scale – a scale of measurement that has a true zero point as well as all the characteristics of the nominal, ordinal, and interval scales
Rationalism – idea that reason is the primary source of knowledge
Reactivity – an alteration in performance that occurs as a result of being aware of participating in a study; changes occurring in people because they know they are being observed
Reference group – the norm group used to determine percentile ranks
Reflexivity – self-reflection by the researcher on his or her biases and predispositions
Regression analysis – a set of statistical procedures used to predict the values of a dependent variable based on the values of one or more independent variables
Regression coefficient – the predicted change in Y given a one-unit change in X
Regression line – the line that best fits a pattern of observations
Regression equation – the equation that defines the regression line
Reliability – consistency or stability
Repeated sampling – drawing many or all possible samples from a population
Repeated-measures design – a design in which all participants participate in all experimental treatment conditions
Replication logic – the idea that the more times a research finding is shown to be true with different sets of people, the more confidence we can place in the finding and in generalizing beyond the original participants
Replication – research examining the same variables with different people
Representative sample – a sample that resembles the population
Research design – the outline, plan, or strategy used to answer a research question
Research ethics – a set of principles to guide and assist researchers in deciding which goals are most important and in reconciling conflicting values
Research hypothesis – the hypothesis of interest to the researcher and the one he or she would like to see supported by the study results
Research method – overall research design and strategy
Research plan – the outline or plan that will be used in conducting the research study
Research problem – see problem
Researcher bias – obtaining results consistent with what the researcher wants to find
Researcher-as-detective – metaphor applied to the researcher when searching for cause and effect
Response rate – the percentage of people in a sample who participate in a research study
Response set – tendency to respond in a specific direction regardless of content
Retrospective research – the researcher starts with the dependent variable and moves backward in time
Retrospective questions – questions asking people to recall something from an earlier time
RIE – an index of abstracts of research reports
Rule of parsimony – selecting the simplest theory that works
Sample – the set of elements taken from a larger population
Sampling error – the difference between the value of a sample statistic and the corresponding population parameter
Sampling frame – a list of all the elements in a population
Sampling with replacement – it is possible for elements to be selected more than once
Sampling without replacement – it is not possible for elements to be selected more than once
Sampling interval – the population size divided by the desired sample size; symbolized by “k”
Sampling distribution – the theoretical probability distribution of the values of a statistic that results when all possible random samples of a particular size are drawn from a population
Sampling distribution of the mean – the theoretical probability distribution of the means of all possible random samples of a particular size drawn from a population
Scatterplot – a graph used to depict the relationship between two quantitative variables
Science – an approach for the generation of knowledge
Secondary data – data originally collected at an earlier time by a different person for a different purpose
Secondary source – a source that was created from primary sources, secondary sources, or some combination of the two
Segmenting – dividing data into meaningful analytical units
Selection – selecting participants for the various treatment groups who have different characteristics
Selection by history interaction – occurs when the different comparison groups experience a different history event
Selection by maturation interaction – occurs when the different comparison groups experience a different rate of change on a maturation variable
Selection-maturation effect – when participants in one of two comparison groups grow or develop faster than participants in the other comparison group
Selective coding – the final stage in grounded theory data analysis
Semantic differential – a scaling technique where participants rate a series of objects or concepts
Sequencing effects – biasing effects that can occur when each participant must participate in each experimental treatment condition
Shared values – the culturally defined standards about what is good or bad, or desirable or undesirable
Shared beliefs – the specific cultural conventions or statements that people who share a culture hold to be true or false
Significance level – the cutoff the researcher uses to decide when to reject the null hypothesis
Significance testing – a commonly used synonym for hypothesis testing
Simple random sample – a sample drawn by a procedure where every member of the population has an equal chance of being selected (a worked sampling sketch appears after this glossary)
Simple case – when there is only one independent variable and one dependent variable
Simple random sampling – the term usually used for sampling without replacement
Simple case of correlational research – when there is one quantitative independent variable and one quantitative dependent variable
Simple regression – regression based on one dependent variable and one independent variable
Simple case of causal-comparative research – when there is one categorical independent variable and one quantitative dependent variable
Single-case experimental designs – designs that use a single participant to investigate the effect of an experimental treatment condition
Skewed – not symmetrical
Snowball sampling – each research participant is asked to identify other potential research participants
Social desirability response set – tendency to provide answers that are socially desirable
Sourcing – information that identifies the source or attribution of the document
Spearman-Brown formula – a statistical formula used for correcting the split-half reliability coefficient for the shortened test length created by splitting the full-length test into two equivalent halves
Split-half reliability – a measure of the consistency of the scores obtained from two equivalent halves of the same test
Spurious relationship – when the relationship between two variables is due to one or more third variables
Standard error – the standard deviation of a sampling distribution
Standard deviation – the square root of the variance
Standard scores – scores that have been converted from one scale to another to have a particular mean and standard deviation
Standardization – presenting the same stimulus to all participants
Standardized open-ended interview – a set of open-ended questions asked in a specific order and exactly as worded
Starting point – a randomly selected number between one and k
States – distinguishable but less enduring ways in which people differ
Static-group comparison design – comparing the posttest performance of a group of participants who have been given an experimental treatment condition with that of a group that has not been given the experimental treatment condition
Statistic – a numerical characteristic of a sample
Statistical regression – the tendency of very high scores to become lower and very low scores to become higher on posttesting
Statistically significant – the claim made when the evidence suggests that an observed result is probably not attributable to chance
Stratification variable – the variable on which the population is divided
Stratified sampling – dividing the population into mutually exclusive groups and then selecting a random sample from each group
Structural equation modeling – see causal modeling
Summated rating scale – a multi-item scale that has the responses for each person summed into a single score
Summative evaluation – evaluation focused on determining the overall effectiveness of the evaluation object
Survey research – a term sometimes applied to non-experimental research based on questionnaires or interviews
Synthesis – the selection, organization, and analysis of the materials collected
Systematic sample – a sample obtained by determining the sampling interval, selecting a random starting point between 1 and k, and then selecting every kth element
t test for correlation coefficients – statistical test used to determine if a correlation coefficient is statistically significant
t test for independent samples – statistical test used to determine if the difference between the means of two groups is statistically significant
t test for regression coefficients – statistical test used to determine if a regression coefficient is statistically significant
Table of random numbers – a list of numbers that fall in a random order
Target population – the larger population to whom the study results are to be generalized
Telephone interview – an interview conducted over the phone
Temporal validity – the extent to which the study results can be generalized across time
Test-retest reliability – a measure of the consistency of scores over time
Testing – any change in scores obtained on the second administration of a test as a result of having previously taken the test
Tests in Print – a primary source of information about published tests
Theoretical sensitivity – when a researcher is effective at thinking about what kinds of data need to be collected and what aspects of already collected data are the most important for the grounded theory
Theoretical validity – the degree to which a theoretical explanation fits the data
Theoretical saturation – occurs when no new information or concepts are emerging from the data and the grounded theory has been validated
Theory – an explanation or an explanatory system; a generalization or set of generalizations used systematically to explain some phenomenon
Theory triangulation – the use of multiple theories and perspectives to help interpret and explain the data
Think-aloud technique – has participants verbalize their thoughts and perceptions while engaged in an activity
Third variable – a confounding extraneous variable
Third variable problem – an observed relationship between two variables may be due to an extraneous variable
Three necessary conditions – three things that must be present if you are to contend that causation has occurred
Time interval sampling – checking for events during specific time intervals
Transcription – transforming qualitative data into typed text
Trend study – independent samples are taken from a population over time and the same questions are asked
Two-stage cluster sampling – first, a set of clusters is randomly selected; second, a random sample of elements is drawn from each of the clusters selected in stage one
Type I error – rejecting a true null hypothesis
Type II error – failing to reject a false null hypothesis
Type technique – manipulating the independent variable by varying the type of variable presented to the different comparison groups
Typical case sampling – selecting what are believed to be average cases
Typology – a classification system that breaks something down into different types or kinds
Unrestricted sampling – the technical term used for sampling with replacement
Upper limit – the largest number in a confidence interval
Utilitarianism – an ethical approach that says judgments of the ethics of a study depend on the consequences the study has for the research participants and the benefits that may arise from the study
Vagueness – uncertainty in the meaning of words or phrases
Validation – the process of gathering evidence that supports an inference based on a test score or scores
Validity coefficient – a correlation coefficient computed between test scores and criterion scores
Validity – a judgment of the appropriateness of the interpretations, inferences, and actions made on the basis of a test score or scores
Variable – a condition or characteristic that can take on different values or categories
Variance – a measure of the average deviation from the mean in squared units
Y-intercept – the point where the regression line crosses the Y-axis
z-score – a raw score that has been transformed into standard deviation units
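Several of the descriptive statistics terms defined above (mean, median, mode, range, variance, standard deviation, measure of central tendency, measure of variability, z-score) can be seen at work in one short computation. The sketch below uses Python's built-in statistics module; the language choice and the ten test scores are illustrative assumptions, not material from the text.

import statistics

# Hypothetical test scores for ten students (illustrative data only)
scores = [72, 85, 90, 66, 85, 78, 95, 88, 70, 81]

mean = statistics.mean(scores)            # measure of central tendency: the arithmetic average
median = statistics.median(scores)        # the 50th percentile
mode = statistics.mode(scores)            # the most frequently occurring number
data_range = max(scores) - min(scores)    # measure of variability: highest minus lowest
variance = statistics.pvariance(scores)   # average squared deviation from the mean
std_dev = statistics.pstdev(scores)       # standard deviation: square root of the variance

# z-scores: raw scores re-expressed in standard deviation units
z_scores = [round((x - mean) / std_dev, 2) for x in scores]

print(f"mean={mean}, median={median}, mode={mode}")
print(f"range={data_range}, variance={variance:.2f}, standard deviation={std_dev:.2f}")
print("z-scores:", z_scores)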
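The entries for confidence interval, confidence limits, point estimate, standard error, margin of error, and level of confidence fit together in a single calculation. The following is a minimal sketch assuming a hypothetical sample of 36 scores and the large-sample normal approximation (critical value 1.96 for 95 percent confidence); the data and the approximation are assumptions for illustration only.

import math
import statistics

# Hypothetical sample of 36 reading scores (illustrative data only)
sample = [74 + (i % 9) for i in range(36)]

n = len(sample)
point_estimate = statistics.mean(sample)           # sample mean used as the point estimate
sample_sd = statistics.stdev(sample)               # sample standard deviation (n - 1 denominator)
standard_error = sample_sd / math.sqrt(n)          # estimated standard deviation of the sampling distribution of the mean

critical_value = 1.96                              # 95% level of confidence, normal approximation
margin_of_error = critical_value * standard_error  # one half of the width of the confidence interval

lower_limit = point_estimate - margin_of_error     # the confidence limits are the interval's endpoints
upper_limit = point_estimate + margin_of_error
print(f"point estimate = {point_estimate:.2f}")
print(f"95% confidence interval: ({lower_limit:.2f}, {upper_limit:.2f})")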
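The entries for correlation coefficient, simple regression, regression coefficient, regression line, regression equation, and Y-intercept can be illustrated with hand computation of the standard formulas. The paired data below (hours studied and exam scores) are hypothetical, and the code is only a sketch, not a procedure taken from the text.

import statistics

# Hypothetical paired data: hours studied (X) and exam score (Y) for eight students
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [55, 61, 60, 68, 74, 77, 80, 86]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)

# Pearson correlation coefficient: strength and direction of the linear relationship
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
r = sxy / (sxx * syy) ** 0.5

# Simple regression: one independent variable and one dependent variable
b = sxy / sxx               # regression coefficient: predicted change in Y per one-unit change in X
a = mean_y - b * mean_x     # Y-intercept: where the regression line crosses the Y-axis

print(f"r = {r:.3f}")
print(f"regression equation: Y' = {a:.2f} + {b:.2f}X")
print(f"predicted score for 5 hours of study: {a + b * 5:.1f}")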
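Hypothesis testing, the null hypothesis, the significance level, and the t test for independent samples come together in the next sketch. The posttest scores for an experimental and a control group are invented for illustration, and the critical value of 2.145 is the conventional two-tailed value for 14 degrees of freedom at the .05 significance level from a standard t table; in practice a researcher would also report the exact probability value.

import math
import statistics

# Hypothetical posttest scores for an experimental group and a control group (8 per group)
experimental = [78, 85, 90, 84, 88, 92, 81, 86]
control      = [72, 75, 80, 70, 78, 74, 77, 73]

n1, n2 = len(experimental), len(control)
m1, m2 = statistics.mean(experimental), statistics.mean(control)
v1, v2 = statistics.variance(experimental), statistics.variance(control)  # n - 1 denominators

# Pooled-variance t test for independent samples
pooled_variance = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
standard_error = math.sqrt(pooled_variance * (1 / n1 + 1 / n2))
t = (m1 - m2) / standard_error
df = n1 + n2 - 2

# Null hypothesis: the two population means are equal.
# Significance level (alpha) = .05 with a nondirectional (two-tailed) alternative hypothesis.
critical_t = 2.145  # two-tailed critical value for df = 14 from a standard t table

print(f"t({df}) = {t:.2f}")
if abs(t) > critical_t:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")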
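Coefficient alpha (Cronbach's alpha), split-half reliability, and the Spearman-Brown formula can also be computed directly from item responses. The four-item ratings from six hypothetical respondents below are an assumption made only for this sketch of the usual formulas.

import statistics

# Hypothetical responses of six people to a four-item attitude scale (1-5 ratings)
items = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

k = len(items[0])                            # number of items
totals = [sum(person) for person in items]   # total score for each person
item_cols = list(zip(*items))                # responses grouped by item

# Coefficient alpha: an internal consistency estimate for a homogeneous test
sum_item_variances = sum(statistics.variance(col) for col in item_cols)
total_variance = statistics.variance(totals)
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Split-half reliability: correlate scores on two halves of the test,
# then correct for the shortened length with the Spearman-Brown formula
def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

half1 = [person[0] + person[2] for person in items]   # items 1 and 3
half2 = [person[1] + person[3] for person in items]   # items 2 and 4
r_halves = pearson_r(half1, half2)
spearman_brown = (2 * r_halves) / (1 + r_halves)

print(f"coefficient alpha = {alpha:.2f}")
print(f"split-half r = {r_halves:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")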
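Finally, simple random sampling, systematic sampling (with its sampling interval k and random starting point), and proportional stratified sampling can all be illustrated from one sampling frame. The frame of 100 hypothetical student IDs drawn from two schools is an assumption made only for the example; the random seed is fixed so the illustration is reproducible.

import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: 100 students, 60 in school A and 40 in school B
frame = [f"A{i:02d}" for i in range(60)] + [f"B{i:02d}" for i in range(40)]
desired_n = 10

# Simple random sampling (without replacement): every element has an equal chance
simple_random = random.sample(frame, desired_n)

# Systematic sampling: sampling interval k = population size / desired sample size,
# a random starting point within the first interval, then every kth element
k = len(frame) // desired_n
start = random.randint(0, k - 1)
systematic = frame[start::k]

# Proportional stratified sampling: sample from each stratum in proportion to its size
strata = {"A": [e for e in frame if e.startswith("A")],
          "B": [e for e in frame if e.startswith("B")]}
stratified = []
for name, members in strata.items():
    n_stratum = round(desired_n * len(members) / len(frame))
    stratified.extend(random.sample(members, n_stratum))

print("simple random:", simple_random)
print("systematic:   ", systematic)
print("stratified:   ", stratified)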
PART III: Partial Listing of Selected References and Acknowledgements
Partial Listing of Selected References and Acknowledgements

PROCEDURES IN EDUCATIONAL RESEARCH AND BASIC STATISTICS
Preliminary First Edition/Copyrights Pending/NOT FOR SALE, 2005

Directories
American Educators Encyclopedia, 1991
A Critical Dictionary of Educational Concepts, 2nd edition, 1990
The Educator’s Desk Reference: A Sourcebook of Educational Information and Research, 1989
Patterson’s American Education, 2000

Dictionaries and Encyclopedias
A Critical Dictionary of Educational Concepts: An Appraisal of Selected Ideas and Issues in Educational Theory and Practice, 1990
Encyclopedia of Educational Research, 6th edition, 1992
Encyclopedia of Ethics, 1992
Encyclopedia of Learning and Memory, 1992
The Facts on File Dictionary of Education, 1988
The International Encyclopedia of Education Research and Studies, 1994
World Education Encyclopedia, 1988
The World of Learning, 2000

Yearbooks and Handbooks
Educator’s Handbooks: A Research Perspective, 1987
International Handbook of Education Systems, Vol. 1: Europe and Canada; Vol. 2: North Africa and the Middle East; Vol. 3: Asia, Australasia and Latin America, 1988
Statistical Yearbook/Annuaire Statistique/Anuario Estadistico, 1984
Comprehensive Dissertation Index, 1861-1972; 1973+
Dissertation Abstracts Online, 1861– (accessible only from Mugar Reference Department; abstracts from 1980+)
The Dissertation Handbook: A Guide to Successful Dissertations, 2nd edition, 1993

Statistics
Black Americans: A Statistical Sourcebook, Education Reference X E 185.5 B63 1990 – Mugar Reference X E 185.5 B63 2000
The Condition of Teaching: A State-by-State Analysis – Mugar Reference X LB 2832.2 C66, 1988
Digest of Education Statistics – Mugar Reference X L 112 F62
Education at a Glance: OECD Indicators – Mugar Reference X LB 2846 B56, 2000
Index to International Statistics – Mugar Reference X Z 7552 153
The National Education Goals Report: Building a Nation of Learners, Education Reference X LA 210 N37
Public Schools USA: A Comparative Guide to School Districts, Education Reference X LA 217.2 H37, 1991
Status of the American Public School Teacher, Education Reference X LB 283.2 S7 1987 – Mugar Reference X LB 283.2 1987
UNESCO Statistical Digest, Education Reference X L 11 S863
World Education Report, 1991
Periodicals
American Educational Research Journal
American Journal of Education
Basic Education
Comparative Education Review
The Education Digest
The Educational Forum
Educational Research
Educational Studies
Educational Theory
Estimates of School Statistics
Harvard Educational Review
International Journal of Scholarly Academic Intellectual Diversity
International Forum of Educational Renewal
International Review of Education
Journal of Education
Journal of Educational and Behavioral Statistics
The Journal of Educational Research
National FORUM of Educational Administration and Supervision Journal, http://www.nationalforum.com/
National FORUM of Applied Educational Research Journal, http://www.nationalforum.com/
National FORUM of Teacher Education Journal, http://www.nationalforum.com/
National FORUM of Special Education Journal, http://www.nationalforum.com/
On-Line Scholarly Electronic Journal Division of National FORUM Journals, http://www.nationalforum.com/
Peabody Journal of Education
Rankings of the States
Research in Education
Review of Educational Research
Review of Research in Education
Teaching and Teacher Education
The Yearbook of the National Society for the Study of Education

Web Sites
American Demographics, www.umich.edu/-nes
Bureau of Economic Analysis, www.bea.doc.gov
Bureau of Labor Statistics, www.stats.bis.gov
Condition of Education, nces.ed.gov/pubsearch/pubsinfo.asp?pubid=1999022
Digest of Education Statistics, nces.ed.gov/pubs2000/digest99
Encyclopedia of Education Statistics, nces.ed.gov/edstats
Eurostat, europa.eu.int/comm/eurostat
Ferret, www.edc.gov/nchs/datawh/ferret/htm
Fed Stats, www.f3dstats.gov
International Archives of Education Data, www.icpso.umich.edu/IAED
Inter-university Consortium of Political and Social Research, www.lib.lsu.edu/gov/fedgov.html
National Center for Education Statistics, nces.ed.gov
National Center for Education Statistics – Search Tools and Related Information, nces.ed.gov/pubsearch
National Center for Education Statistics – Survey and Program Areas, nces.ed.gov/surveys
National Center for Health Statistics, www.cdc.gov/nchs/default.htm
NATIONAL FORUM JOURNALS, www.nationalforum.com
Projections of Education Statistics to 2009, nces.ed.gov/pubsearch/pubsinfo.asp?pubid=1999038
The Qualitative Report, www.nove.edu/ssss/OR
Research and Statistics, www.ed.gov/stats.html
Research Reports from The National Research and Development Centers, http://research.cse.ucla.edu
STAT-USA Internet, www.stat-usa.gov
Statistical Abstracts of the United States, www.census.gov/state_abstract
Statistical Resources on the Web, www.lib.umich.edu/libhome/Documents.Center/Stats.html
University of Michigan Documents Center: Statistics Section, www.lib.umich.edu/libhome/Documents.center
University of Virginia Social Science Data Center, www.lib.virginia.edu/social/interactives.html
Testing and Assessment
Boston.com – MCAS Tests
Educational Testing Service Index, ericae.net/testcol.htm#ETSTF
Test Locator, www.ericae.net/testcol.htm
UNESCO

Research Centers and Education Laboratories
American Education Research Association, www.aera.net
Center for Applied Linguistics, www.cal.org/crede
Center for Research on Education, Diversity, and Excellence (CREDE)
Center for Research on Evaluation, Standards, and Student Testing (CRESST), cress96.cse.ucla.edu
Center for Research on the Education of Students Placed At-Risk (CRESPAR), www.csos.jhu.edu/crespar/CreSPaR.html
Center for the Improvement of Early Reading Achievement (CIERA), www.ciera.org
Center for the Study of Teaching and Policy (CTP), depts.washington.edu/ctpmail
Common Core of Data: Information on Public Schools and School Districts in the United States, nces.ed.gov/ccd/ccddata.html
National Center for Early Development and Learning (NCEDL), www.fpg.unc.edu/~ncedl
National Center for Improving Student Learning and Achievement in Mathematics and Science (NCISLA), www.wcer.wise.edu/NCISLA
National Center for Postsecondary Improvement (NCPI), ncip.Stanford.edu
National Center for the Study of Adult Learning and Literacy (NCSALL), Gseweb.Harvard.edu/-ncsall
National Center on the Gifted and Talented (NRC/GT), www.gifted.uconn.edu/nrcgt.html
National Center on Increasing the Effectiveness of State and Local Education Reform Efforts, www.upenn.ed/gse/cpre
National Research and Development Center on English Learning & Achievement (CELA), eela.Albany.edu
Research Reports from The National Research and Development Centers, research.cse.ucla.edu

Ask ERIC, www.askeric.org
ED Pubs, www.ed.gov/pubs/edpubs.html
Educational Research and Improvements Reports and Studies, www.ed.gov/pubs/studies.html
Education Resource Organizations Directory, www.ed.gov/Programs/EROD
Educational Resources Information Center (ERIC), www.accesseric.org
ERIC Clearinghouses, www.accesseric.org/sites/barak.html
ERIC Digests, www.ed.gov/databases/ERIC_Digests/index
ERIC Document Reproduction Service, www.edrs.com
ERIC/AE Full Text Internet Library, www.ericae.net/ftlib.htm
How to Get Copies of ERIC Database Materials, www.accesseric.org/resources/pocket/materials.html
Massachusetts Department of Education, www.doe.mass.edu
National Library of Education, www.ed.gov/NLE
Office of Educational Research and Improvement (OERI), www.bu.ed/library/research-guides/eduresearch.html
Search the ERIC Database, accesseric.org/searchdb/searchdb.html
State Departments of Education, www.ed.gov

Other Selected References
Aiken, L. R. (1988). Psychological Testing and Assessment. Boston, MA: Allyn & Bacon.
Babbie, E. R. (1989). The Practice of Social Research (5th edition). Belmont, CA: Wadsworth.
Best, J., & Kahn, J. (1998). Research in Education (8th edition). Boston, MA: Allyn & Bacon.
Borich, G., & Kubiszyn, T. (2000). Educational Testing and Measurement (6th edition). New York, NY: John Wiley & Sons.
Charles, C. M., & Mertler, C. A. (2002). Introduction to Educational Research (4th edition). Boston, MA: Allyn & Bacon.
DeMoulin, D. F., & Kritsonis, W. A. (2009). A Statistical Journal: Taming of the Skew. Murrieta, CA: AlexisAustin.
Dillman, D. (1978). Mail and Telephone Surveys: The Total Design Method. New York, NY: John Wiley & Sons.
Gall, J. P., Gall, M. D., & Borg, W. R. (2005). Applying Educational Research: A Practical Guide. Boston, MA: Pearson.
Johnson, B., & Christensen, L. (2004). Educational Research: Quantitative, Qualitative and Mixed Approaches. Boston, MA: Pearson Education/Allyn and Bacon.
Kritsonis, W. A. (2002). William Kritsonis, PhD on SCHOOLING. Mansfield, OH: BookMasters.
Kritsonis, W. A. (2003). Procedures in Educational Research and Design. Mansfield, OH: BookMasters.
Mertler, C. (2003). Classroom Assessment. Los Angeles, CA: Pyrczak Publishing.
Popham, W., & Sirotnik, K. (1973). Educational Statistics (2nd edition). New York, NY: Harper & Row.
Spatz, C., & Johnson, J. (1989). Basic Statistics. Pacific Grove, CA: Brooks/Cole Publishing.
Sprinthall, R. (2000). Basic Statistical Analysis (6th edition). Boston, MA: Allyn & Bacon.
Sprinthall, R., Schmutte, G. T., & Sirois, L. (1990). Understanding Educational Research. Englewood Cliffs, NJ: Prentice-Hall.
Stigler, S. (1986). The History of Statistics. Cambridge, MA: Harvard University Press.
Worthen, B., & Sanders, J. (1987). Educational Evaluations. New York, NY: Longman.
PART IV: About the Authors
ABOUT THE AUTHORS

William Allan Kritsonis, PhD, is Editor-in-Chief of the NATIONAL FORUM JOURNALS. He is a tenured professor in the PhD Program in Educational Leadership at Prairie View A&M University/Member Texas A&M University System. He was a Visiting Lecturer (2005) at the Oxford Round Table, Oriel College in the University of Oxford, Oxford, ENGLAND. Dr. Kritsonis is also a Distinguished Alumnus (2004) of Central Washington University in the College of Education and Professional Studies, Ellensburg, Washington. He has authored or co-authored numerous articles and conducted several research presentations with students and colleagues in the field of education. Dr. Kritsonis has served education as a school principal, superintendent of schools, director of field experiences and student teaching, consultant, and professor.

Kimberly Grantham Griffith, Ph.D., is Editor of THE LAMAR UNIVERSITY ELECTRONIC JOURNAL OF STUDENT RESEARCH. She is a tenured associate professor in the Department of Professional Pedagogy at Lamar University/Member Texas State University System. Dr. Griffith is also a Councilor (board member) for the At-Large Division, Council for Undergraduate Research (CUR). In April 2000, she received the prestigious Lamar University Merit Award for teaching excellence. Dr. Griffith serves on the editorial board of the Electronic Journal of Inclusive Education. She has co-authored numerous articles and conducted several research presentations with students and colleagues in the field of education.

Cristian Bahrim, Ph.D., is an assistant professor in the Department of Chemistry and Physics at Lamar University and holds a joint appointment in the Department of Electrical Engineering. He is an author or co-author of several papers published in peer-reviewed journals, books, and conference proceedings. He has conducted several research projects in the fields of atomic physics, optics, lasers, astronomy, and physics education. Since 2001, Dr. Bahrim has served as a reviewer for the Journal of Physics of the Institute of Physics (England), and he recently joined the editorial board of The Lamar University Electronic Journal of Student Research. Dr. Bahrim received the M.S. degree in Physics from the University of Bucharest in 1991 and the Ph.D. degree in Physics from the University of Paris in 1997. He held a research associate position at Kansas State University (1999-2001) and was a research assistant at the Institute of Atomic Physics, Romania (1991-1998). He received two Outstanding McNair Mentor awards in 2005. Since 2000, he has been selected for several Marquis® Who’s Who publications. Dr. Bahrim was the recipient of a French Government Scholarship (1991-1996).
David E. Herrington, Ph.D., is an assistant professor in the Department of Educational Leadership and Counseling at Prairie View A&M University/Member Texas A&M University System. He has supervised more than 2,000 field-based student research projects. Dr. Herrington has co-authored numerous articles with students in the area of education. He believes that everyone uses statistical thinking and inductive reasoning in everyday life. Making students aware of their existing skills and knowledge in these areas provides them with a sense that, in many ways, statistical reasoning and scientific processes are familiar. The value of a statistics course comes from the development of the specialized vocabulary, participative data gathering methods, and data analysis techniques that can enhance or leverage existing conceptual frameworks that students bring into the learning process.

Robert L. Marshall, EdD, is the Senior National Editor of NATIONAL FORUM JOURNALS. He is a tenured professor in the Educational Leadership Department at Western Illinois University. His background in education spans over 25 years and includes teaching in secondary public schools, campus- as well as district-level administrative experience, and ten years in higher education as a professor of Educational Leadership in the Texas A&M University System. Dr. Marshall’s research interests are in the areas of distance education and secondary student success initiatives, along with studies related to the principalship and superintendency in public schools.

Copyright © 2006 William Allan Kritsonis, Ph.D.
ALL RIGHTS RESERVED/FOREVER