
Education In Korea (Ppt)

by Professor  at Schulich School of Education, Nipissing University on Sep 24, 2007

  • 19,205 views

 


Upload Details

Uploaded via SlideShare as Microsoft PowerPoint

Usage Rights

CC Attribution-ShareAlike License

Comments (1–10 of 47)

  • Sharonhps (Sharon Ha) Very interesting to know about your country. 5 years ago
  • alysaally (Alysaally) Education in Korea is very nice. 5 years ago
  • Will1945 Will1945 The Basics Understanding and Using Statistics by William Allan Kritsonis, PhD



    1. The most common skill necessary for doing statistics is counting. For example:



    a. the number of days a student is present or absent

    b. the number of items correct or incorrect on a test

    c. the number of discipline referrals

    d. frequency of unacceptable or desirable behaviors

    e. the number of attempts required to master a skill



    2. The second most common skill used in statistics is measurement. For example, things we measure in education include:



    a. achievement of individuals or achievement gaps between groups

    b. aptitude

    c. interest

    d. skill level

    e. knowledge

    f. attitudes of teachers, parents, students toward specific thing

    g. opinions of various constituencies

    h. beliefs of important players in the organization

    i. level and type of motivation

    j. degree of improvement

    k. progress

    l. behaviors

    3. The most frequently applied mathematical operations in statistics include addition, subtraction, multiplication, and division.

    If you know how to count, measure, add, subtract, multiply, and divide, then you ALREADY possess the skills necessary to do statistics.
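    The claim above can be verified in a few lines of code. This sketch uses Python for illustration (the source discusses SPSS and SAS, not Python), and the attendance counts are hypothetical:

```python
# Counting, adding, and dividing are all the arithmetic an "average" needs.
# The attendance figures below are hypothetical, used only to illustrate.
days_present = [170, 162, 175, 158, 169]  # one count per student

total = sum(days_present)                 # addition
n = len(days_present)                     # counting
average = total / n                       # division

print(average)  # 166.8
```

    Nothing beyond the four operations named in point 3 is used.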





    4. Many statistical concepts have become a part of our daily vocabulary.

    We use these concepts without thinking. For example:



    a. I am going to calculate the “average.” (statisticians call this the arithmetic mean or mean)



    b. She is above average. (statisticians say more precisely that her performance on a measurement was one, two or three standard deviations above the mean.)



    c. I am 99.9% sure. (statisticians call this p < .001 or confidence level; that is to say, these results were not due to accident or chance)



    d. That information seems a bit “skewed.” (statisticians say that the mean and median are not equal and that the distribution is positively or negatively skewed)



    e. There is a correlation between this and that. (statisticians say that there is a statistically significant relationship between this and that. The correlation is usually stated in numeric form, for example r=.34, p< .01)
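    The everyday terms in the list above map directly onto a few lines of code. A minimal sketch using Python's standard statistics module (the test scores are hypothetical):

```python
import statistics

# Hypothetical test scores, used only to illustrate the vocabulary above.
scores = [70, 72, 75, 78, 80, 82, 85, 88, 90, 95]

mean = statistics.mean(scores)      # the "average" (arithmetic mean)
sd = statistics.stdev(scores)       # sample standard deviation
median = statistics.median(scores)

# "She is above average": how many standard deviations above the mean?
z = (95 - mean) / sd

# "Skewed": when the mean and median are not equal, the distribution is skewed.
skewed = mean != median

print(mean, median)  # 81.5 81.0
```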





    5. Established research designs and procedures for calculating and thinking about statistics already exist. All you have to do is learn the directions and follow them. Making your work easier are the facts that:



    a. Research design tells you what data to gather.



    b. Statistical procedures and formulas already exist and can be used for calculating your data.



    c. Statistical software such as the Statistical Package for the Social Sciences (SPSS) and SAS makes the analysis of your data very systematic and complete, including tables, graphs, and charts.



    1) SPSS is a quality software application for students in the initial stage of learning statistical analyses. In addition, SPSS is a low-cost resource for students, and it provides professional statistical analysis tools in a user-friendly software environment for both Mac and PC users. A list of resources for learning SPSS is provided at the end of the chapter.



    2) SAS is a more complex package with high-level statistical analysis capabilities. SAS handles a wide variety of specialized functions for data analysis and procedures. This software package is utilized extensively in business and industry as well as educational settings, offering tools for both specialized and enterprise-wide analytical needs. SAS is provided for PC, UNIX, and mainframe computer platforms. A list of resources for learning SAS is provided at the end of the chapter.



    6. In a very short time you will realize that you can use your existing skills but will use them MORE skillfully when you do statistics.



    a. By counting, measuring, comparing, and examining relationships of the RIGHT things you will be able to skillfully analyze data and draw accurate and MEANINGFUL conclusions.



    b. You will learn to use your findings and conclusions to make better informed educational decisions.









    Web Resources for SPSS

    • http://www.utexas.edu/its/rc/tutorials/stat/spss/spss1/index.html

    • http://www.ats.ucla.edu/STAT/mult_pkg/whatstat/default.htm

    • http://www.stat.tamu.edu/spss.php

    • http://www.spsstools.net/spss.htm

    • http://cs.furman.edu/rushing/mellonj/spss1.htm

    • http://www.ats.ucla.edu/stat/spss/examples/default.htm

    • http://www.psych.utoronto.ca/courses/c1/spss/toc.htm

    • http://www.ats.ucla.edu/stat/spss/modules/default.htm

    • http://data.fas.harvard.edu/projects/SPSS_Tutorial/spsstut.shtml

    • http://www.cas.lancs.ac.uk/short_courses/intro_spss.html

    • http://www.cas.lancs.ac.uk/short_courses/notes/intro_spss/session1.pdf

    • http://www.bris.ac.uk/is/learning/documentation/spss-t2/spss-t2.pdf

    • http://calcnet.mth.cmich.edu/org/spss/toc.htm

    • http://www.indiana.edu/~statmath/stat/spss/

    • http://dl.lib.brown.edu/gateway/ssds/SPSS%202%20Hypothesis%20Testing%20and%20Inferential%20Statistics.pdf

    • http://dl.lib.brown.edu/gateway/ssds/SPSS1%20Finding%20and%20Managing%20Data%20for%20the%20Social%20Sciences.pdf

    • http://www.shef.ac.uk/scharr/spss/index2.htm



    Web Resources for SAS



    • http://www.itc.virginia.edu/research/sas/training/v8/

    • http://www.ats.ucla.edu/stat/sas/sk/

    • http://www.ssc.wisc.edu/sscc/pubs/stat.htm

    • http://web.fccj.org/~jtrifile/SAS2.html

    • http://www.utexas.edu/cc/stat/tutorials/sas8/sas8.html

    • http://www.ats.ucla.edu/stat/sas/modules/

    • http://www.psych.yorku.ca/lab/sas/

    • http://instruct.uwo.ca/sociology/300a/SASintro.htm

    • http://web.utk.edu/~leon/jmp/

    • http://www.stat.unc.edu/students/owzar/stat101.html
    5 years ago
  • Will1945 Will1945 Getting Started With Research: Avoiding the Pitfalls by William Allan Kritsonis, PhD & David E. Herrington, PhD - PVAMU, The Texas A&M University System



    Any of the following mistakes can prevent a study from getting off the ground or being carried out to completion. Avoid these mistakes by listening to the voice of experienced professors when they tell you to modify your study. Consider the following mistakes and the proposed solutions.



    1. Research is conducted with conflicting purposes or research questions that do not match your stated purpose. Research efforts may halt due to the confusion.



    Solution: Write the purpose and research questions with clarity and simplicity. Allow expert writers to critique your work and take their suggestions seriously.



    2. Researcher fails to distinguish between the practical problem and the research problem. She may try to save the whales with her study when a better understanding of the problems that endanger the whales is what is needed. The study may prove too unwieldy to complete. The goal may be too grandiose to be attainable.



    Solution: Map out the entire research agenda necessary to address a practical problem, then carefully carve out for your own study the part that is most significant and workable. Remember that your goal is to finish.



    3. Researcher attempts to make the study overly complex when a simpler design would yield equally useful information. The study may become unwieldy and may obfuscate rather than shed light on the subject.



    Solution: Examine all research questions included in your study and rank them in order of significance and usefulness. If any data do not help fulfill the purpose of your study, then they should be dropped so that the other areas can stand out.



    4. Researcher attempts to define the problem and purpose of the study without first engaging in an extensive reading of all relevant literature. This results in a superficial or naïve study that is not very useful.



    Solution: Read everything you can get your hands on and systematically sort the types of studies and conceptual areas. Your study will then take on a well-informed vision of what more needs to be known.



    5. Researcher defines the problem and purpose of the study without first seeking the counsel of experts who are knowledgeable about the subject. Once completed, the study may lack credibility with practitioners.



    Solution: Spend a great deal of time talking to practitioners about the problems they face when dealing with the issues that you are interested in writing about. Let them provide you with an expert perspective as you seek to define the problem and purpose of your study.



    6. Researcher uses methodologies that he does not understand well. If the design is inappropriate to the purpose of the study or the form of the data is wrong, he may be unable to interpret the data or complete the study.



    Solution: Consult statistics and research design experts regarding your goals as a researcher. Take courses that you need to become proficient in the specific methodologies that you wish to apply to your study.



    7. The methodology or the title of the study drives the study rather than the purpose. When a study is driven primarily by methodology, the purpose and significance are diminished to make the study easier to complete. This may result in a less significant or useful study.



    Solution: Do not title your work until you understand the research problem well and the purpose that your study will reflect. Avoid selecting a cool-sounding methodology until you are certain that it will help you answer the specific things that you need to know.



    8. Catchy phrases or terms are used to define the purpose and problem while little attention is paid to the significance of a study. Study may be well done, or even interesting, but may not be very useful.



    Solution: The significance of a study can mean the difference in whether the study is published or whether it is actually read. Understand who the intended audience of a study may be and try to address their interests and needs, particularly what they need to know.



    9. Study is not sufficiently delineated and limited so that the time or effort required to complete the study becomes overwhelming.



    Solution: Listen to your professors when they tell you the study may take a lot longer if it is not narrowed down. Provide a “recommendations for further research” section in your work so that extraneous matters may be addressed in the future by you or other researchers.
    5 years ago
  • Will1945 Will1945 Ethics and Research by William Allan Kritsonis, PhD & David E. Herrington, PhD -- PVAMU, The Texas A&M University System



    1. Responsible conduct guides researchers. Universities, federal and state governments, as well as professional organizations, have guidelines on ethical behavior in research.



    2. Informed consent - Participants must be informed and voluntarily give their consent to participate in a study.



    - Participants must be fully informed about all procedures and possible risks.

    - Participants informed of purpose of research and how data will be used.

    - Benefits of study.

    - Alternative treatments and potential compensation.

    - They must understand and arrive at a decision without coercion. (Voluntary participation)

    - Starts before the research begins.

    - Privacy and confidentiality of research subjects and data.

    - Contacts

    - Approval of the IRB (Institutional Review Board)



    3. Termination of research if harm is likely. Risk-benefit assessments.



    4. Special protection for vulnerable populations of research subjects.



    5. Equitable recruitment of participants.



    6. Results should be for the good of society and unattainable by any other means.



    7. Beneficence - To promote understanding and shed light on the human condition. Protection of those participating in the study.



    8. Honesty - No data to be suppressed, data should be reported as collected.



    9. Misconduct



    - Fabrication

    - Falsification

    - Plagiarism





    SUGGESTED STUDENT ACTIVITIES:



    1. In small groups discuss the relationship between academic freedom and research ethics. Share your discussion with the entire class.



    2. What steps should researchers take to ensure all areas of informed consent are addressed in their research study? Share your discussion with the class.



    3. What steps would you take to make sure you are not involved in unethical conduct in research? Share your discussion with the class.







    WEBSITES

    APA's Research “Ethics and Regulation”

    http://www.apa.org/science/research.html



    National Institutes of Health (NIH) “Bioethics Resources” http://www.nih.gov/sigs/bioethics/index.html



    Research Ethics

    http://faculty.ncwc.edu/toconnor/308/308lect10.htm



    The National Institutes of Health (NIH) “Human Participants Protections Education for Research Teams”

    http://ethics.od.nih.gov/



    The Department of Health and Human Services' (DHHS) Office of Research Integrity http://www.ori.hhs.gov/
    5 years ago
  • Will1945 Will1945 Ethics in Research on Human Subjects and the Role of the Institutional Review Board

    Frequently Asked Questions by William Allan Kritsonis, PhD & David E. Herrington, PhD, PVAMU - The Texas A&M University System



    1. What is an IRB?



    The IRB is a committee assigned the task of reviewing proposed research at a university or other institution that receives federal funds and is in the business of conducting research on human subjects. The IRB is required by Part 46 of Title 45 of the Code of Federal Regulations, also called 45 CFR 46. According to the Department of Health and Human Services, it is the responsibility of the IRB to recommend to university officials that proposed research be either approved or disapproved based on a set of rules called the Common Rule.



    2. Why do we have IRBs?



    Every institution that conducts research on human subjects and also receives federal funds must provide a formal mechanism for ensuring that research is conducted in a manner that reflects nationally recognized standards. Failure to comply with policy can place researchers and their institutions at risk for litigation. In a few instances the federal government has temporarily suspended all research activities at key research universities for failure to comply with the law.



    3. What is the Common Rule?



    The Common Rule was established in 1991 in federal law 45 CFR 46.112. It details all of the areas of compliance with accepted norms for conducting research on human subjects established by the Helsinki Agreement and a series of declarations referred to as the Belmont Report. These principles are detailed in the Common Rule and include:

    a. informed consent

    b. protection of confidentiality or anonymity of all human subjects

    c. acknowledging the right of the subject not to participate in a study

    d. ensuring that subject is aware of his or her right to discontinue the study at any time without adverse consequence

    e. ensuring that the study provides a benefit to the community

    f. ensuring that the study has a direct benefit for the subject participating in the study

    g. ensuring that the subject is aware of the risks involved in the study

    h. ensuring that the researcher has sought the least invasive or intrusive way to obtain the same information

    i. that the individual subject has given permission to be deceived during an experimental study

    j. that parents have granted permission for children under the age of 18 to participate

    k. that any psychological or physical harms will be remedied with expenses paid by the researchers.

    l. the researcher is protected from possible harms or is taking informed risks

    m. specific measures for achieving each of the above has been spelled out

    n. that these measures are meticulously followed.



    4. Are all studies subject to IRB approval?



    No. However, all studies that involve gathering data from the public or that will be published in some form must be reviewed before university officials will approve the protocol. To accommodate social science and historical research, expedited review protocols may be submitted. Studies that must be reviewed meet the following criteria:



    a. the results will be published

    b. the study involves experimentation on human subjects

    c. the study is invasive or intrusive in some way

    d. the study involves deception

    e. there are possible risks to the subject

    f. there may be no community benefit or direct benefit for the subject

    g. there is a possible conflict of interest by researchers in the study

    h. medical or mental health research

    5. When my study has been approved by the IRB, are there any additional requirements that researchers must follow?



    Yes. The Common Rule states that research approved by an IRB may be subject to further review for approval or disapproval by officials of the institution under the following circumstances:



    a. if a third party complains of possible wrong-doing or harms realized

    b. a senior administrator at the university may raise questions that would result in a follow-up IRB review.
    5 years ago
  • Will1945 Will1945 RESEARCH, WRITING & PUBLICATION by William Allan Kritsonis, PhD & David E. Herrington, PhD - PhD Program in Educational Leadership, PVAMU, Texas A&M University System.





    1. Brainstorm ideas for research and possible publication.



    - Look at current journals to see what is current or a “hot” topic. Many also have a “Call for Papers” listing the topics they plan to publish in future editions.

    - Ask professional educational organizations what topics are popular or important issues in their field of education.

    - Think about what interests you. You have to live with the topic until you complete it. If you are not interested in the topic, it will become boring or be difficult to keep on task and complete.

    - Find out if a colleague or another person in the field of education has a project, interest, etc. that you could work on with them.

    - Find out if a textbook company is looking for someone to write a chapter in a textbook. These might be posted on their website, or they might send an email to those on their listserv.



    2. Determine the type of manuscript you want to write. (NOTE: You are working on a manuscript. Many people call or interchange the term article for manuscript. A MANUSCRIPT is work that is submitted for possible publication. An ARTICLE is a manuscript that has been published.)



    - Objective survey of the literature available on a topic

    - Analysis of literature to support the author’s viewpoint

    - Interpretive paper on a specific theory, concept, etc.

    - Theory paper that develops a new conceptual framework

    - Research paper - describing the study, participants, results, conclusions, etc.

    - Chapter for a textbook (these are the easiest to get accepted since they do not have to go through a blind peer-review process)

    - Other types of papers as indicated in the professional journals you read





    3. It's also important to know what types of manuscripts a journal typically publishes.



    - The library should have current issues for your review. Many can be found online.

    - Review the types of article in several issues of the journal. Do they accept a variety of topics for publication or do they have a theme for the issue?

    - Read the submission or author guidelines. Many can be found online.

    - Look at the expertise of the members of the editorial board for ideas on their research interests.



    4. The acceptance rates of journals can range from 5% to 80%. Look at publishing in journals where the turnaround time may be shorter. Journals that have very high submission rates have high rejection rates. Use your time wisely: don’t “tie up” a manuscript for 18 months if the journal has a low acceptance rate.



    5. Ask colleagues which journals they have submitted manuscripts to. They can give good advice on the “where to” and “where not to” for submissions.



    6. Determine to which journal you will submit your manuscript. It is important to know where you are going in order to know how to begin the writing process. It is like taking a trip: you can have a well-organized vacation by using a map, or a “fly by the seat of your pants” experience without one. You save time and energy and have a greater chance of successful publication by knowing where you are going. (Remember research ethics: only submit your manuscript to one journal at a time. You may submit to another journal once you receive notice from the editor that your manuscript will not be published.)



    7. When possible, collaborate in writing! A group of two or more can share ideas and the work.



    - Decide on the topic

    - Decide the role and responsibility of each team member. (Use each other’s talents. Some are better at writing, others at finding the references, others at editing, etc.)

    - Set timelines

    - Meet on a regular basis to keep each other on task, and make changes as needed.

    8. Schedule a time to write every day. Make it automatic! Thirty to ninety minutes a day, or at least three times a week. This will help you to stay on target and not get overwhelmed at the last minute when your writing project is due.



    9. Develop an outline for your manuscript. You can read the published articles in the journal where you plan to submit and determine what type of outline to develop.



    10. Write your introduction and summary first. Most problems are found in these sections. They become a guide to your manuscript (a roadmap) and will keep you focused on the route you are taking.



    11. As you write, make sure the manuscript indicates you know what is current on the topic. Make sure to have at least one or two references from the same year you plan to submit your manuscript.



    12. Make sure your manuscript has a solid conceptual basis.



    13. Make sure that findings in your conclusion have been substantiated in your paper.



    14. When the paper is well organized and near completion have a couple of colleagues review and edit it.



    - Does it make sense to someone else who has read it?

    - Does it follow the publication style? (APA, Chicago, MLA, etc.)



    15. Tips for submitting your manuscript after it is completed:



    - Make sure you have the exact copies required.

    - Write a cover letter with the current editor’s name.

    - The cover letter should be neat and include a brief description of your manuscript, why you are submitting it, and your contact information.

    - If an online submission, are all guidelines for submission followed?

    - If mailing the manuscript, have the post office weigh the envelope so you can buy the correct amount of postage.





    16. Most editors will confirm they have received your manuscript through a letter or email. If you do not receive confirmation within a couple of weeks, call or email the editor to check whether the manuscript was received. Remember, FedEx trucks and mail trucks have crashed, and hurricanes have damaged mail. Sometimes forces of nature and accidents do cause a manuscript to fall by the wayside.



    17. If you get an acceptance letter, GREAT JOB!! If you receive a letter indicating the manuscript was not accepted for publication, review the editorial comments.

    - Revise and resubmit if the editor indicates this should be done.

    - If you have questions about the comments made by reviewers, contact the editor and ask for clarification.

    - Ask the editor if they have a suggestion for another journal that might be more appropriate.

    - Revise and look at other potential journals for possible publication.

    - Don’t worry, your manuscript might not have been the “right fit” for that journal or the right time to be submitted there.

    - Sometimes a journal receives several manuscripts on the same topic. The topic might be saturated. Look for another journal to submit the manuscript.

    - Take heart that everyone will get some “rejection” letters. One of your authors had that experience four times on her first manuscript. Although she kept writing other manuscripts, and those were being accepted, the first one was rejected four times. On the fifth submission it was published.

    NEVER GIVE UP, JUST KEEP SEARCHING FOR THE RIGHT JOURNAL.
    5 years ago
  • Will1945 Will1945 Fundamental Terms in Educational Research

    and Basic Statistics - Compiled by William Allan Kritsonis, PhD, Professor (Tenured), PVAMU, Texas A&M University System



    A priori codes – codes developed before examining the current data



    A-B-A design – a single-case experimental design in which the response to the experimental treatment condition is compared to baseline responses taken before and after administering the treatment condition



    A-B-A-B design – an A-B-A design that is extended to include the reintroduction of the treatment condition



    Accessible population – the research participants available for participation in the research



    Achievement tests – tests designed to measure the degree of learning that has taken place after being exposed to a specific learning experience



    Acquiescence response set – the tendency to agree (or disagree) with items regardless of their content



    Action research – applied research focused on solving practitioner’s problems



    Alternative hypothesis – statement that the population parameter is some value other than the value stated by the null hypothesis



    Amount technique – manipulating the independent variable by giving the various comparison groups different amounts of the independent variable.



    Analysis of covariance – used to examine the relationship between one categorical independent variable and one quantitative dependent variable controlling for one or more extraneous variables; it’s a statistical method that can be used to statistically “equate” groups that differ on a pretest or some other variable



    Analysis of variance – see one-way analysis of variance



    Anchor – a written descriptor for a point on a rating scale



    Anonymity – keeping the identity of the participant from everyone, including the researcher



    Applied research – research about practical questions



    Aptitude tests – tests that focus on information acquired through the informal learning that goes on in life



    Archived research data – data originally used for research purposes and then stored



    Axial coding – the second stage in grounded theory data analysis



    Back stage behavior – what people say and do only with their closest friends



    Bar graph – a graph that uses vertical bars to represent the data



    Baseline – the behavior of the participant prior to the administration of a treatment condition



    Basic research – research about fundamental processes



    Boolean operators – words used to create logical combinations



    Bracket – to suspend your preconceptions or learned feelings about a phenomenon



    Carryover effect – a sequencing effect that occurs when performance in one treatment condition is influenced by participation in a prior treatment condition(s)



    Case – a bounded system



    Case study research – research that provides a detailed account and analysis of one or more cases



    Categorical variable – a variable that varies in type or kind

    Causal modeling – a form of explanatory research where the researcher hypothesizes a causal model and then empirically tests the model. Also called structural equation modeling or theoretical modeling.



    Causal-comparative research – a form of non-experimental research where the primary independent variable of interest is categorical



    Cause and effect relationship – when one variable affects another variable



    Cell – a combination of two or more independent variables in a factorial design



    Census – a study of the whole population rather than a sample



    Changing-criterion design – a single-case experimental design in which a participant’s behavior is gradually altered by changing the criterion for success during successive treatment periods



    Checklist – a list of response categories that respondents check if appropriate



    Chi square test for contingency tables – statistical test used to determine if a relationship observed in a contingency table is statistically significant
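    As a sketch of what this test computes, here it is written out by hand in Python (used for illustration; the counts are hypothetical, and a statistics package would normally do this):

```python
# A minimal chi-square test of independence for a 2x2 contingency table,
# computed from hypothetical counts using only the standard library.
table = [[30, 10],   # group A: e.g. pass / fail
         [20, 20]]   # group B: e.g. pass / fail

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        # Expected count under independence: (row total * column total) / N
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (observed - expected) ** 2 / expected

# With 1 degree of freedom, chi-square above 3.841 means p < .05.
print(round(chi_square, 2), chi_square > 3.841)  # 5.33 True
```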



    CIJE (Current Index to Journals in Education) – an annotated index of articles from educational journals



    Closed-ended question – a question that forces participants to choose a response



    Cluster – a collective type of unit that includes multiple elements



    Cluster sampling – type of sampling where clusters are randomly selected



    Co-occurring codes – sets of codes that partially or completely overlap



    Coding – marking segments of data with symbols, descriptive words, or category names



    Coefficient alpha – a variant of the Kuder-Richardson formula that provides an estimate of the reliability of a homogeneous test

    Cohort – any group of people with a common classification or common characteristic



    Cohort study – longitudinal research focusing specifically on one or more cohorts



    Collective case study – studying multiple cases in one research study



    Complete participant – researcher becomes member of group being studied and does not tell members they are being studied



    Complete observer – researcher observes as an outsider and does not tell the people they are being observed



    Comprehensive sampling – including all cases in the research study



    Concurrent validity – validity evidence obtained from assessing the relationship between test scores and criterion scores obtained at the same time



    Confidence interval – a range of numbers inferred from the sample that has a certain probability of including the population parameter



    Confidence limits – the endpoints of a confidence interval
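The relationship among a confidence interval, its limits, and the margin of error can be sketched in Python (sample values hypothetical; the normal critical value 1.96 is used for simplicity, though a t critical value would be slightly wider for a small sample):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of five scores.
scores = [10, 12, 14, 16, 18]
n = len(scores)

# Approximate 95% confidence interval for the population mean.
standard_error = stdev(scores) / sqrt(n)
margin_of_error = 1.96 * standard_error       # half the width of the interval
lower_limit = mean(scores) - margin_of_error  # the confidence limits are the
upper_limit = mean(scores) + margin_of_error  # endpoints of the interval
print(round(lower_limit, 2), round(upper_limit, 2))
```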



    Confidentiality – not revealing the identity of the participant to anyone other than the researcher and the researcher’s staff



    Confounding variable – an extraneous variable that systematically varies with the independent variable and also influences the dependent variable



    Constant – a single value or category of a variable



    Constant comparative method – data analysis in grounded theory research



    Construct validity – evidence that a theoretical construct can be inferred from the scores on a test



    Construct – an informed, scientific idea developed or “constructed” to describe or explain behavior

    Content validity – a judgment of the degree to which the items, tasks, or questions on a test adequately sample the domain of interest



    Contextualization – the identification of when and where an event took place



    Contingency table – a table displaying information in cells formed by the intersection of two or more categorical variables



    Control group – the group that does not receive the experimental treatment condition



    Convenience sampling – people who are available or volunteer or can be easily recruited are included in the sample



    Convergent evidence – evidence that the scores on prior tests and the current test designed to measure the same construct are correlated



    Correlation coefficient – an index indicating the strength and direction of relationship between two variables
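A minimal sketch of computing a Pearson correlation coefficient by hand (the paired data are hypothetical); the sign of r gives the direction of the relationship and its absolute size gives the strength:

```python
from math import sqrt

# Hypothetical paired data: hours studied (x) and test score (y).
x = [1, 2, 3, 4, 5]
y = [52, 60, 55, 70, 80]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n

# Pearson r: co-variation of x and y divided by the product of their spreads.
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
ss_x = sum((a - mean_x) ** 2 for a in x)
ss_y = sum((b - mean_y) ** 2 for b in y)
r = cov / sqrt(ss_x * ss_y)
print(round(r, 3))  # -> 0.906: a strong positive correlation
```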



    Correlational research – a form of non-experimental research where the primary independent or predictor variable of interest is quantitative



    Corroboration – comparing documents to each other to determine whether they provide the same information or reach the same conclusion



    Counterbalancing – administering the experimental treatment conditions to all comparison groups, but in a different order



    Criterion of falsifiability – statements and theories should be “refutable”



    Criterion-related validity – a judgment of the extent to which scores from a test can be used to predict or infer performance in some activity



    Critical case sampling – selecting what are believed to be particularly important cases



    Cronbach’s alpha – see coefficient alpha



    Cross-sectional research – data are collected at a single point in time



    Culture – a system of shared beliefs, values, practices, perspectives, folk knowledge, language, norms, rituals, and material objects and artifacts that the members of a group use in understanding their world and in relating to others



    Data set – a set of data



    Data triangulation – the use of multiple data sources



    Debriefing – a post study interview in which all aspects of the study are revealed, any reasons for deception are explained, and any questions the participant has about the study are answered



    Deception – misleading or withholding information from the research participant



    Deductive reasoning – drawing a specific conclusion from a set of premises



    Deductive method – a top down or confirmatory approach to science



    Dehoaxing – informing participants about any deception used and the reasons for its use



    Deontological approach – an ethical approach that says ethical issues must be judged on the basis of some universal code



    Dependent variable – a variable that is presumed to be influenced by one or more independent variables



    Description – attempting to describe the characteristics of a phenomenon



    Descriptive validity – the factual accuracy of an account as reported by the researcher



    Descriptive research – research focused on providing an accurate description or picture of the status or characteristics of a situation or phenomenon



    Descriptive statistics – division of statistics focused on describing, summarizing, or making sense of a particular set of data



    Desensitizing – reducing or eliminating any stress or other undesirable feelings the participant may have as a result of participating in the study



    Determinism – the assumption that all events have causes



    Diagnostic tests – tests designed to identify where a student is having difficulty with an academic skill



    Diagramming – making a sketch, drawing, or outline to show how something works or to clarify the relationship between the parts of a whole



    Differential attrition – when participants do not drop out randomly



    Differential influence – when the influence of an extraneous variable is different for the various comparison groups



    Direct effect – the effect of the variable at the origin of an arrow on the variable at the receiving end of the arrow



    Directional alternative hypothesis – an alternative hypothesis that contains either a “greater than” sign or a “less than” sign



    Discriminant evidence – evidence that the scores on the newly developed test are not correlated with the scores on tests designed to measure theoretically different constructs



    Disproportional stratified sampling – type of stratified sampling where the sample proportions are made to be different from the population proportions on the stratification variable



    Double negative – a sentence construction that includes two negatives



    Double-barreled question – a question that combines two or more issues or attitude objects



    Duplicate publication – publishing the same data and results in more than one journal or in other publications

    Ecological validity – the ability to generalize the study results across settings



    Effect size indicator – a statistical measure of the strength of a relationship



    Element – the basic unit that is selected from the population



    Emic term – a special word or term used by the people in a group



    Emic perspective – the insider’s perspective



    Empirical – based on observation or experience



    Empiricism – idea that knowledge comes from experience



    Enumeration – the process of quantifying data



    Equal probability selection method – any sampling method where each member of the population has an equal chance of being selected



    Equivalent-forms reliability – a measure of the consistency of a group of individuals’ scores on two equivalent forms of a test measuring the same construct



    ERIC – a database containing information from CIJE and RIE



    Essence – the invariant structure of the experience



    Ethical skepticism – an ethical approach that says concrete and inviolate moral codes cannot be formulated



    Ethnocentrism – judging people from a different culture according to the standards of your own culture



    Ethnography – the discovery and comprehensive description of the culture of a group of people; it’s a form of qualitative research focused on describing the culture of a group of people



    Ethnohistory – the study of the cultural past of a group of people



    Ethnology – the comparative study of cultural groups



    Etic term – outsider’s words or special words that are used by social scientists



    Etic perspective – an external, social scientific view of reality



    Evaluation – determining the worth, merit, or quality of an evaluation object



    Event sampling – observing only after specific events have occurred



    Exhaustive categories – a set of categories that classify all of the relevant cases in the data



    Exhaustive – property that response categories or intervals include all possible responses



    Expectancy data – data illustrating the number or percentage of people that fall into various categories on a criterion measure



    Experiment – an environment in which the researcher objectively observes phenomena that are made to occur in a strictly controlled situation in which one or more variables are varied and the others are kept constant



    Experimental group – the group that receives the experimental treatment condition



    Experimental control – eliminating any differential influence of extraneous variables



    Experimenter effect – the unintentional effect that the researcher can have on the outcome of a study



    Explanation – attempting to show how and why a phenomenon operates as it does



    Explanatory research – testing hypotheses and theories that explain how and why a phenomenon operates as it does



    Exploration – attempting to generate ideas about phenomena



    Extended fieldwork – collecting data in the field over an extended period of time



    External validity – the extent to which the study results can be generalized to and across populations of persons, settings and times



    External criticism – determining the validity, trustworthiness, or authenticity of the source



    Extraneous variable – a variable that may compete with the independent variable in explaining the outcome; any variable other than the independent variable that may influence the dependent variable



    Extreme case sampling – identifying the “extremes” or poles of some characteristic and then selecting cases representing these extremes for examination



    Facesheet codes – codes that apply to a complete document or case



    Factor analysis – a statistical procedure that identifies the minimum number of “factors,” or dimensions, measured by a test



    Factorial design – a design in which two or more independent variables are simultaneously studied to determine their independent and interactive effects on the dependent variable



    Factorial design based on a mixed model – a factorial design in which different participants are randomly assigned to the different levels of one independent variable but all participants take all levels of another independent variable



    Fieldnotes – notes taken by the observer



    Filter question – an item that directs participants to different follow-up questions depending on the response



    Focus group – a moderator leads a discussion with a small group of people



    Formative evaluation – evaluation focused on improving the evaluation object

    Frequency distribution – arrangement in which the frequency of each unique data value is shown
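Building a frequency distribution amounts to counting how often each unique value occurs, as in this small Python sketch (scores hypothetical):

```python
from collections import Counter

# Hypothetical quiz scores for a class of ten students.
scores = [7, 8, 8, 9, 10, 7, 8, 9, 9, 8]

# A frequency distribution pairs each unique value with its frequency.
freq = Counter(scores)
for value in sorted(freq):
    print(value, freq[value])
```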



    Front stage behavior – what people want or allow us to see



    Fully anchored rating scale – all points are anchored on the rating scale



    General linear model – a mathematical procedure that is the “parent” of many statistical techniques



    Generalize – making statements about a population based on sample data



    Going native – identifying so completely with the group being studied that you can no longer remain objective



    Grounded theory – a general methodology for developing theory that is grounded in data systematically gathered and analyzed; a qualitative research approach



    Group moderator – the person leading the focus group discussion



    Group frequency distribution – arrangement in which the data values are clustered or grouped into separate intervals and the frequency of each interval is given



    Heterogeneous – a set of numbers with a great deal of variability



    Historical research – the process of systematically examining past events or combinations of events to arrive at an account of what happened in the past



    History – any event, other than a planned treatment event, that occurs between the pre- and post-measurement of the dependent variable and influences the post-measurement of the dependent variable



    Holistic description – the description of how members of groups make up a group



    Homogeneity – in test validity, refers to how well a test measures a single construct



    Homogeneous sample selection – selecting a small and homogeneous case or set of cases for intensive study



    Homogeneous – a set of numbers with little variability



    Hypothesis – a prediction or educated guess about the relation that exists among the variables being investigated



    Hypothesis testing – the branch of inferential statistics concerned with how well the sample data support a null hypothesis and when the null hypothesis can be rejected



    In-person interview – an interview conducted face to face



    Independent variable – a variable that is presumed to cause a change in another variable



    Indirect effect – an effect occurring through an intervening variable



    Inductive reasoning – reasoning from the particular to the general



    Inductive codes – codes generated by a researcher by directly examining the data



    Inductive method – a bottom up or generative approach to science



    Inferential statistics – division of statistics that uses the laws of probability to go beyond the immediate data, making inferences and drawing conclusions about populations based on sample data



    Influence – attempting to apply research to change behavior



    Informal conversational interview – spontaneous, loosely structured interview



    Instrumental case study – interest is in understanding something more general than the particular case



    Instrumentation – any change that occurs in the way the dependent variable is measured

    Intelligence – the ability to think abstractly and to learn readily from experience



    Inter-scorer reliability – the degree of agreement between two or more scorers, judges, or raters



    Interaction with selection – occurs when the different comparison groups are affected differently by one of the threats to internal validity



    Interaction effect – when the effect of one independent variable depends on the level of another independent variable



    Intercoder reliability – consistency among different coders



    Interim analysis – the cyclical process of collecting and analyzing data during a single research study



    Internal consistency – the consistency with which a test measures a single construct



    Internal validity – the ability to infer that a causal relationship exists



    Internal criticism – the reliability or accuracy of the information contained in the sources collected



    Internet – a network of millions of computers joined to promote communication



    Interpretive validity – accurately portraying the meaning given by the participants to what is being studied



    Interrupted time-series design – a design in which a treatment condition is assessed by comparing the pattern of posttest responses obtained from a single group of participants



    Interval scale – a scale of measurement that has equal intervals of distances between adjacent numbers



    Intervening variable – a variable occurring between two other variables in a causal chain

    Interview – a data collection method where interviewer asks interviewee questions



    Interview guide approach – specific topics and/or open-ended questions are asked in any order



    Interview protocol – data collection instrument used in an interview



    Interviewee – the person being asked questions



    Interviewer – the person asking the questions



    Intracoder reliability – consistency within a single individual



    Intrinsic case study – interest is in understanding a specific case



    Investigator triangulation – the use of multiple investigators in collecting and interpreting the data



    IRB – the institutional review board that assesses the ethical acceptability of research proposals



    Item stem – the set of words forming a question or statement



    Kuder-Richardson formula 20 – a statistical formula used to compute an estimate of the reliability of a homogeneous test



    Laboratory observation – observation done in a lab or other setting set up by the researcher



    Leading question – a question that suggests a researcher is expecting a certain answer



    Level of confidence – the probability that a confidence interval to be constructed from a random sample will include the population parameter



    Life-world – an individual’s inner world of immediate experience



    Likert scale – a summated rating scale



    Line graph – a graph that relies on the drawing of one or more lines

    Loaded question – a question containing loaded or emotionally charged words



    Logic of significance testing – understanding and following the logic underlying tests of statistical significance



    Longitudinal research – data are collected at multiple time points and comparisons are made across time



    Low-inference descriptors – description phrased very close to the participants’ accounts and the researchers’ field notes



    Lower limit – the smallest number on a confidence interval



    Main effect – the effect of one independent variable



    Manipulation – an intervention studied by an experimenter



    Margin of error – one half of the width of a confidence interval



    Master list – a list of all the codes used in a research study



    Maturation – any physical or mental change that occurs over time that affects performance on the dependent variable



    Maximum variation sampling – purposively selecting a wide range of cases



    Mean – the arithmetic average



    Measure of relative standing – provides information about where a score falls in relation to the other scores in the distribution of data



    Measure of central tendency – the single numerical value that is considered the most typical of the values of a quantitative variable



    Measure of variability – a numerical index that provides information about how spread out or how much variation is present



    Measurement – the act of measuring by assigning symbols or numbers to something according to a specific set of rules

    Median – the 50th percentile



    Median location – the numerical place where you can find the median in a set of ordered numbers



    Mediating variable – an intervening variable



    Memoing – recording reflective notes about what you are learning from the data



    Mental Measurements Yearbook – one of the primary sources of information about published tests



    Meta-analysis – a quantitative technique used to integrate and describe the results of a large number of studies



    Method of working hypotheses – attempting to identify all rival explanations



    Method of data collection – technique for physically obtaining data to be analyzed in a research study



    Methods triangulation – the use of multiple research methods



    Mixed purposeful sampling – the mixture of more than one sampling strategy



    Mode – the most frequently occurring number
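The three measures of central tendency defined above (mean, median, mode) can be computed directly with Python's standard library (scores hypothetical):

```python
from statistics import mean, median, mode

# Hypothetical set of quiz scores.
scores = [70, 75, 75, 80, 85, 90, 95]

print(mean(scores))    # arithmetic average: the mean
print(median(scores))  # 50th percentile: the median
print(mode(scores))    # most frequently occurring number: the mode
```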



    Moderator variable – a variable involved in an interaction effect; see interaction effect



    Mortality – a differential loss of participants from the various comparison groups



    Multigroup research design – a research design that includes more than one group of participants



    Multimethod research – the use of more than one research method



    Multiple operationalism – the use of several measures of a construct



    Multiple regression – regression based on one dependent variable and two or more independent variables



    Multiple time-series design – an interrupted time-series design that includes a control group to rule out a history effect



    Multiple-baseline design – a single-case experimental design in which the treatment condition is successively administered to different participants, or to the same participant in several settings, after baseline behaviors have been recorded for different periods of time



    Multiple-treatment interference – occurs when participation in one treatment condition influences a person’s performance in another treatment condition



    Mutually exclusive – property that categories or intervals do not overlap



    Mutually exclusive categories – a set of categories that are separate or distinct



    n – the recommended sample size



    N – the population size



    Naturalistic observation – observation done in “real world” settings



    Naturalistic generalization – generalizing based on similarity



    Negative criticism – establishing the reliability or authenticity and accuracy of the content of the documents and other sources used by the researcher



    Negative case sampling – selecting cases that disconfirm the researcher’s expectations and generalizations



    Negative correlation – two variables move in opposite directions



    Negatively skewed – skewed to the left



    Network diagram – a diagram showing the direct links between variables or events over time



    Nominal scale – a scale of measurement that uses symbols or numbers to label, classify, or identify people or objects



    Nondirectional alternative hypothesis – an alternative hypothesis that includes the “not equal to” sign



    Normal distribution – a unimodal, symmetric, bell-shaped distribution that is the theoretical model of many variables
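One defining property of the normal distribution is that about 68% of cases fall within one standard deviation of the mean, which can be checked with Python's standard library (the mean of 100 and standard deviation of 15 are just an illustrative choice):

```python
from statistics import NormalDist

# A normal distribution with mean 100 and standard deviation 15.
dist = NormalDist(mu=100, sigma=15)

# Proportion of cases within one standard deviation of the mean.
within_one_sd = dist.cdf(115) - dist.cdf(85)
print(round(within_one_sd, 3))  # -> 0.683
```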



    Norms – the written and unwritten rules that specify appropriate group behavior



    Null hypothesis – a statement about a population parameter



    Numerical rating scale – a rating scale with anchored endpoints



    Observation – unobtrusive watching of behavioral patterns

    Observer-as-participant – researcher spends limited amount of time observing group members and tells members they are being studied



    Official documents – anything written or photographed by an organization



    One-group pretest-posttest design – a research design in which a treatment condition is administered to one group of participants after pretesting, but before posttesting on the dependent variable



    One-group posttest-only design – administering a posttest to a single group of participants after they have been given an experimental treatment condition



    One-stage cluster sampling – a set of clusters is randomly selected and all of the elements in the selected clusters are included in the sample



    One-way analysis of variance – statistical test used to compare two or more group means



    Open coding – the first stage in grounded theory data analysis



    Open-ended question – a question that allows participants to respond in their own words



    Operationalism – representing constructs by a specific set of steps or operations



    Opportunistic sampling – selecting cases where the opportunity occurs



    Oral histories – based on interviews with a person who has had direct or indirect experience with or knowledge of the chosen topic



    Order effect – a sequencing effect that occurs from the order in which the treatment conditions are administered



    Ordinal scale – a rank-order scale of measurement



    Outlier – a number that is very atypical of the other numbers in a distribution



    Panel study – study where the same individuals are studied at successive points over time



    Parameter – a numerical characteristic of a population



    Partial correlation – used to examine the relationship between two quantitative variables controlling for one or more quantitative extraneous variables



    Partial publication – publishing several articles from the data collected in one large study; is generally not unethical for large studies



    Participant feedback – discussion of the researcher’s conclusions with the actual participants



    Participant-as-observer – researcher spends extended time with the group as an insider and tells members they are being studied



    Path coefficient – a quantitative index providing information about a direct effect



    Pattern matching – predicting a pattern of results and determining if the actual results fit the predicted pattern



    Peer review – discussing one’s interpretations and conclusions with one’s peers or colleagues



    Percentiles – the points that divide a distribution into 100 equal parts



    Percentile rank – the percentage of scores in a reference group that fall below a particular raw score
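A percentile rank can be computed with a few lines of Python; the sketch below uses a hypothetical reference group and one common convention (counting only scores strictly below the raw score):

```python
# Hypothetical reference-group scores.
reference_scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]

def percentile_rank(raw_score, reference_group):
    """Percentage of scores in the reference group falling below the raw score."""
    below = sum(1 for s in reference_group if s < raw_score)
    return 100 * below / len(reference_group)

print(percentile_rank(85, reference_scores))  # -> 60.0
```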



    Periodicity – the presence of a cyclical pattern in the sampling frame



    Personal documents – anything written or photographed for private purposes



    Personality – a multifaceted construct that does not have a generally agreed on definition



    Phenomenology – the description of one or more individuals’ consciousness and experience of a phenomenon



    Pilot test – a preliminary test of your questionnaire



    Point estimate – the estimated value of a population parameter



    Point estimation – the use of the value of a sample statistic as the estimate of the value of a population parameter



    Population – the complete set of cases; it’s the large group to which a researcher wants to generalize the sample results

    Population validity – the ability to generalize the study results to the individuals not included in the study



    Positive correlation – two variables move in the same direction



    Positive criticism – ensuring that the statements made or the meaning conveyed in the various sources is correct



    Positively skewed – skewed to the right



    Post hoc fallacy – making the argument that because A preceded B, A must have caused B



    Post hoc test – a follow-up test to the analysis of variance



    Posttest-only control-group design – administering a posttest to two randomly assigned groups of participants after one group has been administered the experimental treatment condition



    Practical significance – a conclusion made when a relationship is strong enough to be of practical importance



    Prediction – attempting to predict or forecast a phenomenon



    Predictive research – research focused on predicting the future status of one or more dependent variables based on one or more independent variables



    Predictive validity – validity evidence obtained from assessing the relationship between test scores collected at one point in time and criterion scores obtained at a later time



    Presence or absence technique – manipulating the independent variable by presenting one group the treatment condition and withholding it from the other group



    Presentism – the assumption that the present-day connotations of terms also existed in the past



    Pretest-posttest control-group design – a research design that administers a posttest to two randomly assigned groups of participants after both have been pretested and one of the groups has been administered the experimental treatment condition



    Primary source – a source in which the creator was a direct witness or in some other way directly involved or related to the event



    Primary data – original data collected as part of a research study



    Probabilistic cause – changes in variable A “tend” to produce changes in variable B; it’s a cause that usually produces an outcome



    Probability value – the probability of the result of your research study, or an even more extreme result, assuming that the null hypothesis is true
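How a probability value arises can be sketched with a simple one-sample z test (all numbers hypothetical; the z test is just one convenient example of a significance test):

```python
from statistics import NormalDist

# Hypothetical question: is a sample mean of 104 from n = 36 consistent with
# a null-hypothesis population mean of 100, given sigma = 12?
sample_mean, n = 104, 36
null_mean, sigma = 100, 12

z = (sample_mean - null_mean) / (sigma / n ** 0.5)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed probability value
print(round(p_value, 3))
```

A small p value (conventionally below .05) would lead the researcher to reject the null hypothesis.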



    Probability proportional to size – a type of two-stage cluster sampling where each cluster’s chance of being selected in stage one depends on its population size



    Probe – prompt to obtain response clarity or additional information



    Problem of induction – things that happened in the past may not happen in the future



    Problem – an interrogative sentence that asks about the relation that exists between two or more variables



    Proportional stratified sampling – type of stratified sampling where the sample proportions are made to be the same as the population proportions on the stratification variables
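A minimal sketch of proportional stratified sampling (the strata names and sizes are hypothetical): each stratum contributes to the sample in the same proportion it holds in the population.

```python
import random

# Hypothetical population of 1,000 students in two strata.
strata = {
    "freshmen": list(range(600)),    # 60% of the population
    "sophomores": list(range(400)),  # 40% of the population
}
total = sum(len(members) for members in strata.values())
sample_size = 50

random.seed(1)  # fixed seed so the example is reproducible
sample = {}
for name, members in strata.items():
    # Keep the sample proportions equal to the population proportions.
    n_stratum = round(sample_size * len(members) / total)
    sample[name] = random.sample(members, k=n_stratum)

print({name: len(chosen) for name, chosen in sample.items()})
```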



    Prospective study – another term applied to a panel study



    Purposive sampling – the researcher specifies the characteristics of the population of interest and locates individuals with those characteristics



    Qualitative observation – observing all potentially relevant phenomena



    Qualitative research – research relying primarily on the collection of qualitative data



    Quantitative interview – an interview providing quantitative data



    Quantitative observation – standardized observation



    Quantitative variable – a variable that varies in degree or amount



    Quantitative research – research relying primarily on the collection of quantitative data



    Quasi-experimental research design – an experimental research design that does not provide for full control of potential confounding variables primarily by not randomly assigning participants to comparison groups



    Questionnaire – a self-report data collection instrument filled out by research participants



    Quota sampling – the researcher determines the appropriate sample sizes or quotas for the groups identified as important and takes convenience samples from these groups



    Random assignment – randomly assigning a set of people to different groups; it’s a statistical control procedure that maximizes the probability that the comparison groups will be equated on all extraneous variables
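Random assignment can be sketched as shuffling the participant list and splitting it (participant labels hypothetical):

```python
import random

# Ten hypothetical participants to be assigned to two groups of five.
participants = ["P01", "P02", "P03", "P04", "P05",
                "P06", "P07", "P08", "P09", "P10"]

random.seed(42)  # fixed seed so the example is reproducible
shuffled = random.sample(participants, k=len(participants))
treatment_group = shuffled[:5]   # experimental group
control_group = shuffled[5:]     # control group
print(treatment_group, control_group)
```

Because every ordering is equally likely, any given participant is as likely to land in one group as the other, which is what equates the groups on extraneous variables in the long run.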



    Range – the difference between the highest and lowest numbers



    Ranking – the ordering of responses into ranks



    Rating scale – a continuum of response choices



    Ratio scale – a scale of measurement that has a true zero point as well as all the characteristics of the nominal, ordinal, and interval scales



    Rationalism – idea that reason is the primary source of knowledge



    Reactivity – an alteration in performance that occurs as a result of being aware of participating in a study; it refers to changes occurring in people because they know they are being observed



    Reference group – the norm group used to determine the percentile ranks



    Reflexivity – self-reflection by the researcher on his or her biases and predispositions



    Regression analysis – a set of statistical procedures used to predict the values of a dependent variable based on the values of one or more independent variables



    Regression coefficient – the predicted change in Y given a one-unit change in X



    Regression line – the line that best fits a pattern of observations



    Regression equation – the equation that defines the regression line
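The regression coefficient, line, and equation defined above fit together as in this least-squares sketch for one predictor (data hypothetical):

```python
# Hypothetical data: x = hours of tutoring, y = achievement score.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares regression coefficient (slope) and intercept.
slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
         / sum((a - mean_x) ** 2 for a in x))
intercept = mean_y - slope * mean_x

# Regression equation: predicted Y = intercept + slope * X.
predicted = intercept + slope * 6
print(slope, intercept, predicted)  # -> 2.0 1.0 13.0
```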



    Reliability – consistency or stability



    Repeated sampling – drawing many or all possible samples from a population



    Repeated-measures design – a design in which all participants participate in all experimental treatment conditions



    Replication logic – the idea that the more times a research finding is shown to be true with different sets of people, the more confidence we can place in the finding and in generalizing beyond the original participants



    Replication – research examining the same variables with different people



    Representative sample – a sample that resembles the population



    Research design – the outline, plan, or strategy used to answer a research question



    Research ethics – a set of principles to guide and assist researchers in deciding which goals are most important and in reconciling conflicting values



    Research hypothesis – the hypothesis of interest to the researcher and the one he or she would like to see supported by the study results



    Research method – overall research design and strategy



    Research plan – the outline or plan that will be used in conducting the research study



    Research problem – see problem



    Researcher bias – obtaining results consistent with what the researcher wants to find



    Researcher-as-detective – a metaphor applied to the researcher when searching for cause and effect



    Response rate – the percentage of people in a sample who participate in a research study



    Response set – tendency to respond in a specific direction regardless of content



    Retrospective research – the researcher starts with the dependent variable and moves backward in time



    Retrospective questions – questions asking people to recall something from an earlier time



    RIE (Resources in Education) – an index of abstracts of research reports



    Rule of parsimony – selecting the simplest theory that works



    Sample – the set of elements taken from a larger population



    Sampling error – the difference between the value of a sample statistic and a population parameter

    Sampling frame – a list of all the elements in a population



    Sampling with replacement – it is possible for elements to be selected more than once



    Sampling without replacement – it is not possible for elements to be selected more than once
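
The distinction between the two sampling schemes can be shown with Python's standard random module (the population and seed here are illustrative):

```python
import random

population = list(range(1, 101))  # illustrative population of 100 elements
random.seed(42)                   # fixed seed for a reproducible illustration

# Sampling without replacement: no element can be selected more than once
# (the usual meaning of simple random sampling).
without = random.sample(population, 10)

# Sampling with replacement: the same element may be selected more than once.
with_repl = [random.choice(population) for _ in range(10)]

assert len(set(without)) == 10  # all distinct by construction
```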



    Sampling interval – the population size divided by the desired sample size; it is symbolized by “k”



    Sampling distribution – the theoretical probability distribution of the values of a statistic that results when all possible random samples of a particular size are drawn from a population



    Sampling distribution of the mean – the theoretical probability distribution of the means of all possible random samples of a particular size drawn from a population
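
This definition can be made concrete with a small simulation, assuming an illustrative normal population: draw many random samples of the same size, record each sample mean, and examine the resulting distribution. The standard deviation of those means approximates the standard error of the mean, σ/√n:

```python
import random
import statistics

random.seed(0)
# Illustrative population: 10,000 scores from roughly N(100, 15).
population = [random.gauss(100, 15) for _ in range(10_000)]

# Approximate the sampling distribution of the mean by repeated sampling.
n = 25
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(2_000)]

# Should hover near sigma / sqrt(n) = 15 / 5 = 3.
se_estimate = statistics.stdev(sample_means)
```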



    Scatterplot – a graph used to depict the relationship between two quantitative variables



    Science – an approach for the generation of knowledge



    Secondary data – data originally collected at an earlier time by a different person for a different purpose



    Secondary source – a source that was created from primary sources, secondary sources, or some combination of the two



    Segmenting – dividing data into meaningful analytical units



    Selection – selecting participants for the various treatment groups such that the groups have different characteristics



    Selection by history interaction – occurs when the different comparison groups experience a different history event

    Selection by maturation interaction – occurs when the different comparison groups experience a different rate of change on a maturation variable



    Selection-maturation effect – when participants in one of two comparison groups grow or develop faster than participants in the other comparison group



    Selective coding – the final stage in grounded theory data analysis



    Semantic differential – a scaling technique where participants rate a series of objects or concepts



    Sequencing effects – biasing effects that can occur when each participant must participate in each experimental treatment condition



    Shared values – the culturally defined standards about what is good or bad or desirable or undesirable



    Shared beliefs – the specific cultural conventions or statements that people who share a culture hold to be true or false



    Significance level – the cutoff the researcher uses to decide when to reject the null hypothesis



    Significance testing – a commonly used synonym for hypothesis testing



    Simple random sample – a sample drawn by a procedure where every member of the population has an equal chance of being selected



    Simple case – when there is only one independent variable and one dependent variable



    Simple random sampling – the term usually used for sampling without replacement



    Simple case of correlational research – when there is one quantitative independent variable and one quantitative dependent variable



    Simple regression – regression based on one dependent variable and one independent variable



    Simple case of causal-comparative research – when there is one categorical independent variable and one quantitative dependent variable



    Single-case experimental designs – designs that use a single participant to investigate the effect of an experimental treatment condition



    Skewed – not symmetrical



    Snowball sampling – each research participant is asked to identify other potential research participants



    Social desirability response set – tendency to provide answers that are socially desirable



    Sourcing – information that identifies the source or attribution of the document



    Spearman-Brown formula – a statistical formula used for correcting the split-half reliability coefficient for the shortened test length created by splitting the full-length test into two equivalent halves



    Split-half reliability – a measure of the consistency of the scores obtained from two equivalent halves of the same test
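
The Spearman-Brown correction named in the two entries above has a standard closed form, r_full = 2·r_half / (1 + r_half), which steps the half-test correlation up to the full test length. A one-function sketch (the example values are illustrative):

```python
def spearman_brown(r_half):
    """Correct a split-half correlation for full test length:
    r_full = 2 * r_half / (1 + r_half)."""
    return 2 * r_half / (1 + r_half)

# A half-test correlation of .60 corresponds to a full-length
# reliability estimate of .75.
r_full = spearman_brown(0.60)  # 0.75
```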



    Spurious relationship – when the relationship between two variables is due to one or more third variables



    Standard error – the standard deviation of a sampling distribution



    Standard deviation – the square root of the variance



    Standard scores – scores that have been converted from one scale to another to have a particular mean and standard deviation



    Standardization – presenting the same stimulus to all participants



    Standardized open-ended interview – a set of open-ended questions are asked in a specific order and exactly as worded



    Starting point – a randomly selected number between one and k



    States – distinguishable, but less enduring ways in which people differ



    Static-group comparison design – comparing posttest performance of a group of participants who have been given an experimental treatment condition with a group that has not been given the experimental treatment condition



    Statistic – a numerical characteristic of a sample



    Statistical regression – the tendency of very high scores to become lower and very low scores to become higher on posttesting



    Statistically significant – the claim, made when the evidence warrants it, that an observed result is probably not attributable to chance



    Stratification variable – the variable on which the population is divided



    Stratified sampling – dividing the population into mutually exclusive groups and then selecting a random sample from each group

    Structural equation modeling – see causal modeling



    Summated rating scale – a multi-item scale that has the responses for each person summed into a single score



    Summative evaluation – evaluation focused on determining overall effectiveness of the evaluation object



    Survey research – a term sometimes applied to non-experimental research based on questionnaires or interviews



    Synthesis – the selection, organization and analysis of the materials collected



    Systematic sample – a sample obtained by determining the sampling interval, selecting a random starting point between 1 and k, and then selecting every kth element
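
Combining the earlier entries for sampling interval and starting point, the procedure can be sketched as follows (the function name and population are illustrative):

```python
import random

def systematic_sample(population, sample_size):
    # Sampling interval k: population size divided by the desired sample size.
    k = len(population) // sample_size
    # Starting point: a randomly selected number between 1 and k (1-indexed).
    start = random.randint(1, k)
    # Select every kth element from the starting point onward.
    return [population[i] for i in range(start - 1, len(population), k)][:sample_size]

random.seed(1)
# Population of 100, sample of 10, so k = 10: one element from each block of 10.
sample = systematic_sample(list(range(1, 101)), 10)
```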



    t test for correlation coefficients – statistical test used to determine if a correlation coefficient is statistically significant



    t test for independent samples – statistical test used to determine if the difference between the means of two groups is statistically significant
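
As a sketch of the pooled-variance t statistic this test is based on (the data and function name are illustrative; in practice the statistic is compared against a t distribution with n₁ + n₂ − 2 degrees of freedom):

```python
import math
import statistics

def t_independent(a, b):
    """Pooled-variance t statistic for the difference between two group means."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)    # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1/na + 1/nb))

t = t_independent([5, 6, 7, 8], [1, 2, 3, 4])  # about 4.38
```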



    t test for regression coefficients – statistical test used to determine if a regression coefficient is statistically significant



    Table of random numbers – a list of numbers that fall in a random order



    Target population – the larger population to whom the study results are to be generalized



    Telephone interview – an interview conducted over the phone



    Temporal validity – the extent to which the study results can be generalized across time



    Test-retest reliability – a measure of the consistency of scores over time



    Testing – any change in scores obtained on the second administration of a test as a result of having previously taken the test



    Tests in Print – a primary source of information about published tests



    Theoretical sensitivity – when a researcher is effective at thinking about what kinds of data need to be collected and what aspects of already collected data are the most important for the grounded theory



    Theoretical validity – the degree to which a theoretical explanation fits the data



    Theoretical saturation – occurs when no new information or concepts are emerging from the data and the grounded theory has been validated



    Theory – an explanation or an explanatory system; a generalization or set of generalizations used systematically to explain some phenomenon



    Theory triangulation – the use of multiple theories and perspectives to help interpret and explain the data



    Think-aloud technique – has participants verbalize their thoughts and perceptions while engaged in an activity



    Third variable – a confounding extraneous variable



    Third variable problem – an observed relationship between two variables may be due to an extraneous variable



    Three necessary conditions – three things that must be present if you are to contend that causation has occurred



    Time interval sampling – checking for events during specific time intervals



    Transcription – transforming qualitative data into typed text



    Trend study – independent samples are taken from a population over time and the same questions are asked



    Two-stage cluster sampling – first a set of clusters is randomly selected and second a random sample of elements is drawn from each of the clusters selected in stage one



    Type I error – rejecting a true null hypothesis



    Type II error – failing to reject a false null hypothesis
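
The link between the significance level and the Type I error rate can be illustrated by simulation: when the null hypothesis is true, a .05 cutoff should reject roughly 5% of the time. A sketch using a two-sample z test with known sigma (all values illustrative):

```python
import math
import random

random.seed(7)
cutoff = 1.96          # two-tailed critical value for a .05 significance level
n, sigma = 30, 1.0
trials = 4_000
rejections = 0

for _ in range(trials):
    # Both groups come from the same population, so the null hypothesis is
    # true here: every rejection is a Type I error.
    a = [random.gauss(0, sigma) for _ in range(n)]
    b = [random.gauss(0, sigma) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    if abs(z) > cutoff:
        rejections += 1

type_i_rate = rejections / trials  # should hover near .05
```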



    Type technique – manipulating the independent variable by varying the type of variable presented to the different comparison groups



    Typical case sampling – selecting what are believed to be average cases



    Typology – a classification system that breaks something down into different types or kinds



    Unrestricted sampling – the technical term used for sampling with replacement



    Upper limit – the largest number on a confidence interval



    Utilitarianism – an ethical approach that says judgments of the ethics of a study depend on the consequences the study has for the research participants and the benefits that may arise from the study



    Vagueness – uncertainty in the meaning of words or phrases



    Validation – the process of gathering evidence that supports an inference based on a test score or scores



    Validity coefficient – a correlation coefficient computed between test scores and criterion scores



    Validity – a judgment of the appropriateness of the interpretations, inferences, and actions made on the basis of a test score or scores



    Variable – a condition or characteristic that can take on different values or categories



    Variance – a measure of the average deviation from the mean in squared units



    Y-intercept – the point where the regression line crosses the Y-axis



    z-score – a raw score that has been transformed into standard deviation units
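
The variance, standard deviation, and z-score entries chain together and can be checked in a few lines (the scores are illustrative; the population formula for variance is used here):

```python
import math

scores = [70, 80, 90, 100]  # illustrative raw scores
n = len(scores)
mean = sum(scores) / n

# Variance: the average squared deviation from the mean.
variance = sum((x - mean) ** 2 for x in scores) / n
# Standard deviation: the square root of the variance.
sd = math.sqrt(variance)
# z-scores: raw scores re-expressed in standard-deviation units;
# they always sum to zero.
z_scores = [(x - mean) / sd for x in scores]
```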
    5 years ago
  • Will1945 National Researchists - Book by William Allan Kritsonis, PhD and Colleagues



    ABOUT THE AUTHORS

    ________________________________________



    William Allan Kritsonis, PhD is Editor-in-Chief of the NATIONAL FORUM JOURNALS. He is a tenured professor in the PhD Program in Educational Leadership at Prairie View A&M University/Member Texas A&M University System. He was a Visiting Lecturer (2005) at the Oxford Round Table, Oriel College in the University of Oxford, Oxford, ENGLAND. Dr. Kritsonis is also a Distinguished Alumnus (2004) at Central Washington University in the College of Education and Professional Studies, Ellensburg, Washington. He has authored or co-authored numerous articles and conducted several research presentations with students and colleagues in the field of education. Dr. Kritsonis has served education as a school principal, superintendent of schools, director of field experiences and student teaching, consultant, and professor.





    Kimberly Grantham Griffith, Ph.D., is Editor of THE LAMAR UNIVERSITY ELECTRONIC JOURNAL OF STUDENT RESEARCH. She is a tenured associate professor in the Department of Professional Pedagogy at Lamar University/Member Texas State University System. Dr. Griffith is also a Councilor (board member) for the At-Large Division, Council for Undergraduate Research (CUR). In April 2000, she received the prestigious Lamar University Merit Award for teaching excellence. Dr. Griffith serves on the editorial board of the Electronic Journal of Inclusive Education. She has co-authored numerous articles and conducted several research presentations with students and colleagues in the field of education.



    Cristian Bahrim, Ph.D., is an assistant professor in the Department of Chemistry and Physics at Lamar University and holds a joint appointment in the Department of Electrical Engineering. He is author or co-author of several papers published in peer-reviewed journals, books, and conference proceedings. He has conducted several research projects in the fields of atomic physics, optics, lasers, astronomy, and physics education. Since 2001, Dr. Bahrim has served as a reviewer for the Journal of Physics of the Institute of Physics (England), and he recently joined the editorial board of “The Lamar University Electronic Research Journal of Student Research”. Dr. Bahrim received the M.S. degree in Physics from the University of Bucharest in 1991 and the Ph.D. degree in Physics from the University of Paris in 1997. He held a research associate position at Kansas State University (1999-2001) and was a research assistant at the Institute of Atomic Physics, Romania (1991-1998). He received two outstanding McNair Mentor awards in 2005. Since 2000, he has been selected for inclusion in several Marquis Who’s Who publications. Dr. Bahrim was the recipient of a French Government Scholarship (1991-1996).









    David E. Herrington, Ph.D., is an assistant professor in the Department of Educational Leadership and Counseling at Prairie View A&M University/Member Texas A&M University System. He has supervised more than 2,000 student research field-based projects. Dr. Herrington has co-authored numerous articles with students in the area of education. He believes that everyone uses statistical thinking and inductive reasoning in everyday life. Making students aware of their existing skills and knowledge in these areas provides them with a sense that, in many ways, statistical reasoning and scientific processes are familiar. The value of a statistics course comes from the development of the specialized vocabulary, participative data gathering methods, and data analysis techniques that can enhance or leverage existing conceptual frameworks that students bring into the learning process.



    Robert L. Marshall, EdD is the Senior National Editor of NATIONAL FORUM JOURNALS. He is a tenured professor in the Educational Leadership Department at Western Illinois University. His background in education spans more than 25 years and includes teaching in secondary public schools, campus- and district-level administrative experience, and ten years in higher education as a professor of Educational Leadership in the Texas A&M University System. Dr. Marshall's research interests are in the areas of distance education and secondary student success initiatives, along with studies related to the principalship and superintendency in public schools.







    Copyright © 2009 William Allan Kritsonis, Ph.D.

    ALL RIGHTS RESERVED/FOREVER
    5 years ago

Education In Korea (Ppt) Presentation Transcript