This document provides information about research methodology tools. It discusses various tools used for data collection in educational research, including questionnaires, checklists, rating scales, attitude scales, interviews, inventories, and observation. It describes the purpose, characteristics, types, and effective use of each tool. It emphasizes the importance of selecting valid and reliable tools that are appropriate for the research purpose and collecting the desired information.
This document discusses various tools used for educational research. It identifies questionnaires, checklists, rating scales, scorecards, and attitude scales as major tools. It provides details on the characteristics, construction, uses, and limitations of each tool. Questionnaires collect standardized information through questions, while checklists record behaviors and ratings. Scorecards and rating scales evaluate qualities on a numerical scale. Attitude scales measure attitudes toward topics through statements along a continuum. Proper tool selection and construction are important for successful educational research.
Evaluation is the process of making judgements about the value or worth of an individual, program, or policy by collecting evidence and assessing progress towards goals. There are several tools used for evaluation, including observation, rating scales, interviews, and tests. Observation can provide direct information about an ongoing process. Rating scales allow for qualitative attributes to be judged quantitatively by describing varying degrees of performance. Interviews are used to understand perspectives and can be structured, semi-structured, or unstructured. The purpose of evaluation is to improve instruction, assess teachers and programs, and help students reach their potential.
Introduction – Observation – Self-Reporting – Anecdotal Records – Check List – Rating Scale – Types of Tests – Assessment Tools for Affective Domain – Attitude Scale – Motivation Scale – Interest Scale – Types of Test Items – Essay Type Questions – Short Answer Questions – Objective Type Questions – Principles for Constructing Test Items
This document discusses checklists, which are lists of categories that respondents check to indicate presence or absence. Checklists are useful for collecting facts in educational surveys and observational studies. They can be used to record behavior, appraise educational aspects like schools/instruction, and rate personality and interests. When constructing a checklist, items should be clearly defined and arranged logically. Checklist data is analyzed by tabulating responses, calculating frequencies, percentages, and other statistics. Checklists allow students to measure their own behavior and development but only indicate presence/absence, not degree. They have limitations but are easy to use and frame when wanting to check "yes/no" for a skill or ability.
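The tabulation step described above (counting responses and converting them to frequencies and percentages) can be sketched in a few lines. This is a minimal illustration with hypothetical item names and yes/no marks, not data from the source document:

```python
from collections import Counter

# Hypothetical checklist responses: each respondent marks "yes" or "no" per item.
responses = {
    "follows instructions": ["yes", "yes", "no", "yes"],
    "works independently":  ["no", "yes", "no", "no"],
    "completes tasks":      ["yes", "yes", "yes", "no"],
}

def tabulate(responses):
    """Return the frequency and percentage of 'yes' checks for each item."""
    summary = {}
    for item, marks in responses.items():
        counts = Counter(marks)
        n = len(marks)
        summary[item] = {
            "yes": counts["yes"],
            "no": counts["no"],
            "pct_yes": round(100 * counts["yes"] / n, 1),
        }
    return summary

for item, stats in tabulate(responses).items():
    print(f"{item}: {stats['yes']}/{stats['yes'] + stats['no']} checked ({stats['pct_yes']}%)")
```

Because checklist items record only presence or absence, frequencies and percentages like these are usually as far as the analysis can go; degree or intensity requires a rating scale instead.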
Classroom Based Assessment Tools and Techniques 27-09-2022.ppt – NasirMahmood976516
This document discusses various methods and purposes of classroom-based assessment. It defines assessment as the systematic process of documenting and using data on student knowledge, skills, attitudes, and beliefs to improve learning. The document outlines different types of assessments including achievement tests, psychological tests, and performance tests. It also discusses formative assessment, which provides feedback to help students improve, versus summative assessment, which evaluates performance against standards. Finally, the document details specific formative assessment techniques teachers can use like interviews, checklists, observations, and case studies.
This PPT covers methods of data collection, explaining how data are collected from samples. It is relevant to researchers and B.Sc. Nursing students. It covers data collection methods such as interviews, observation, questioning, and biophysiological measures, along with the reliability and validity of research tools. It also introduces the five W's of data collection:
What
Where
With whom
When
Why
This document discusses various tools used for data collection, including observation schedules, interview guides, questionnaires, rating scales, checklists, and document schedules. It describes the purpose and construction of each tool. Key tools include observation schedules for recording observations, interview guides for open-ended interviews, questionnaires for surveys, and rating scales for measuring attitudes. Proper construction of data collection tools is important for gathering accurate data and involves determining data needs, pre-testing drafts, and specifying procedures.
Presentation on research methodologies – Bilal Naqeeb
The document provides an overview of research methodologies. It defines research as an organized and systematic way of finding answers to questions. It notes that research is systematic because there are definite procedures and steps followed, and organized because there is a planned structure. The main purpose of research is to find answers to questions. The document then discusses different types of research such as primary and secondary research, as well as pure, applied, scientific and social research. It also outlines tools and techniques used for data collection in research such as surveys, experiments, interviews and case studies. Finally, it discusses key research concepts like variables, hypotheses, sampling, questionnaires and how to design good questions.
Examination and Evaluation-ppt presentation.pptx – AbdulakilMuanje
The document defines key concepts related to assessment including tests, measurements, evaluation, and assessment. It outlines the purposes of assessment as improving student learning and program/institutional improvement. It also describes different frames of reference for interpreting test scores such as ability-referenced, growth-referenced, norm-referenced, and criterion-referenced. The document further discusses types of assessments including formative vs summative and screening vs diagnostic. It also covers Bloom's taxonomy of educational objectives for the cognitive, affective and psychomotor domains.
The document discusses data collection techniques for research, focusing on questionnaires and surveys. It describes key aspects of developing questionnaires such as identifying variables, indicators, and question structures. Questionnaires can collect discrete, continuous, dependent, independent and control variables. The document also discusses survey design, including scaling techniques, and notes that surveys provide representative summaries of populations to answer qualitative research questions. Well-designed questionnaires and surveys are important tools for educational and social science research.
Non Standarized Tests (Using Nursing Approch).pptx – virengeeta
This document discusses various types of non-standardized tests used to measure attitudes, including attitude scales, anecdotal records, and sociometry. It provides details on several types of attitude scales such as Likert scales, semantic differential scales, and Q-sort techniques. It describes how anecdotal records involve written descriptions of significant behavioral incidents observed by teachers. Sociometry is introduced as a basic approach for studying relationships between individuals or groups.
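A Likert-type attitude scale is commonly scored by assigning 1 to 5 to the response options and reverse-scoring negatively worded statements before summing. A minimal sketch of that scoring step, with illustrative item wording and data that are not from the source document:

```python
# Likert scoring sketch: 5-point scale, reverse-scoring negatively
# worded items, then a total attitude score per respondent.

SCALE_MAX = 5  # strongly agree = 5 ... strongly disagree = 1

# True marks a negatively worded item that must be reverse-scored.
items = [
    ("I enjoy mathematics.",           False),
    ("Mathematics makes me anxious.",  True),
    ("I look forward to maths class.", False),
]

def score(responses):
    """responses: raw 1-5 answers, aligned with `items` by position."""
    total = 0
    for (text, reverse), raw in zip(items, responses):
        # Reverse-scoring maps 5 -> 1, 4 -> 2, ..., 1 -> 5.
        total += (SCALE_MAX + 1 - raw) if reverse else raw
    return total

# Agreeing (5) with every statement: the negative item reverse-scores to 1.
print(score([5, 5, 5]))  # 5 + 1 + 5 = 11
```

Reverse-scoring keeps the total pointing in one direction along the attitude continuum, so higher totals consistently mean a more favorable attitude.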
This document discusses various methods of data collection in educational research. It describes data collection as involving deciding when, who, how, and what data to collect. Common research instruments include questionnaires, interviews, observation, existing data, Likert scales, semantic differential scales, and opinionnaires. Questionnaires can be self-administered or mailed but have low response rates. Interviews are conducted in-person but are time-consuming. Observation directly watches participants. The document provides guidelines for developing and using various data collection instruments and methods.
This document discusses key elements and concepts in research methods. It defines concepts and constructs, and explains that concepts simplify observations while constructs are abstract ideas made up of dimensions. It also defines independent and dependent variables, and discrete and continuous variables. The document outlines the differences between reliability and validity in research instruments. It describes four types of validity and discusses advantages and disadvantages of survey research methods. It provides guidance on sampling methods, question construction, and tools used in data collection.
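Reliability, mentioned above, is often estimated for multi-item instruments with an internal-consistency coefficient such as Cronbach's alpha; the source summary does not name a specific coefficient, so this is one common choice shown as a sketch with made-up scores:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    scores: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                       # number of items
    columns = list(zip(*scores))             # transpose: per-item columns
    item_vars = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data: 4 respondents answering 3 items on a 1-5 scale.
data = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
]
print(round(cronbach_alpha(data), 3))  # 0.975
```

Values near 1 indicate that the items vary together and are plausibly measuring the same construct; alpha says nothing about validity, which must be argued separately.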
The document discusses various data collection techniques used in educational research, including observation, interviews, and tests. It provides details on how each technique is conducted and its advantages and limitations. Specifically, it describes overt and covert observation methods, structured and unstructured interviews, and different types of tests used to measure achievement, aptitude, and personality. It also highlights important considerations for properly planning and conducting observations and interviews.
The document discusses various innovative assessment techniques that can be used in education. It begins by defining assessment and explaining its purposes, including diagnostic, formative, and summative assessment. It then describes different types of classroom assessment techniques such as exit cards, peer assessment, journal reflections, concept maps, and Socratic seminars. The document also covers personality, aptitude, and achievement tests as well as interpreting test results using norms, criteria, and performance standards. Overall, the document provides an overview of the meaning and goals of assessment along with specific innovative techniques that can be implemented in the classroom.
The document discusses appropriate assessment methods for determining if students have achieved desired learning outcomes. It describes several common assessment types, including written response instruments, product rating scales, performance tests, oral questioning, observation, and self-reporting. Effective assessment methods match the educational objectives and can include objective tests, essays, examinations, checklists, and more. Teacher observation and questioning are also important for assessment. The document also discusses developing tools to assess affective domains and outlines several methods for doing so, including student self-reports, teacher observations, and peer ratings.
The document discusses various methods of performance assessment including performance testing, anecdotal records, checklists, and rating scales. Performance testing directly observes student performance on tasks requiring skills like critical thinking. Anecdotal records involve recording observations of students in narrative form. Checklists are lists to record presence/absence of behaviors. Rating scales provide qualitative and quantitative judgments of student attributes on a scale. These methods allow measuring achievements, habits, behaviors and attitudes beyond traditional tests.
This document discusses qualitative research methods, focusing on interviews. It defines qualitative research methodology as suitable for investigating new fields of study or identifying important issues. The most common qualitative methods are interviews and observation, which allow for an in-depth understanding of issues through textual interpretation. Interviews are described as semi-structured, unstructured, or structured. Semi-structured interviews use a standard set of questions but also allow for additional clarifying questions. The advantages of interviews include feedback, clarification, and probing complex answers, while the disadvantages include costs, scheduling challenges, and potential lack of anonymity or influence of interviewer style.
The document discusses various methods for assessing affective learning outcomes like attitudes, values, and feelings. It describes three main methods: teacher observation using structured or unstructured methods, student self-reports like interviews and questionnaires, and peer ratings. Specific assessment tools are also outlined, such as rating scales, checklists, and surveys. Key considerations for assessing affective domains include the transient nature of emotions, using multiple approaches, and determining if individual or group results are needed.
This document discusses different scales of measurement used in research including nominal, ordinal, interval, and ratio scales. It provides examples and characteristics of each scale. Nominal scales involve categories without order, ordinal scales involve ordered categories without defined intervals, interval scales have equal intervals but an arbitrary zero point, and ratio scales have an absolute zero point and allow calculations such as proportions. The document also covers topics such as questionnaire design, open-ended and closed-ended question types, and methods of administering questionnaires.
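The four scales differ in which summary statistics are meaningful: the mode for nominal data, the median for ordinal data, means and differences for interval data, and ratios only for ratio data. A small illustrative sketch (all example values are hypothetical):

```python
from statistics import mean, median, mode

# Nominal: categories without order -> only the mode is meaningful.
blood_groups = ["A", "B", "A", "O", "A"]
print(mode(blood_groups))             # "A"

# Ordinal: ordered ranks without equal intervals -> median is meaningful.
satisfaction = [1, 2, 2, 3, 5]        # e.g. 1 = very poor ... 5 = excellent
print(median(satisfaction))           # 2

# Interval: equal intervals, arbitrary zero -> means and differences work,
# but not ratios (20 degrees C is not "twice as hot" as 10 degrees C).
temps_celsius = [10.0, 20.0, 30.0]
print(mean(temps_celsius))            # 20.0

# Ratio: absolute zero -> ratios are meaningful.
heights_cm = [150.0, 180.0]
print(heights_cm[1] / heights_cm[0])  # 1.2
```

Choosing a statistic stronger than the scale supports (for example, averaging nominal codes) produces numbers with no interpretation, which is why identifying the scale comes first in analysis.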
This document discusses various tools and techniques used for data collection in research. It defines research tools as instruments used by researchers to measure what they intend to study. Some major tools discussed are questionnaires, checklists, rating scales, attitude scales, observation, interviews, psychological tests, and sociometry. The document provides examples and purposes of each tool while emphasizing the importance of selecting reliable and valid tools that align with the research questions.
This document provides an overview of research methodology. It defines research and lists its key characteristics as being controlled, rigorous, systematic, valid, verifiable, empirical and critical. It discusses the aims and objectives of research, noting that aims describe desired outcomes while objectives detail specific steps. The document also outlines criteria for good research including using appropriate philosophies and procedures. Upcoming sections will cover research methodology and describing methods in detail.
This document provides information on data, sources of data, purposes of data, data collection methods, questionnaires, and rating scales. It discusses the different types of data, primary and secondary sources of data, and purposes of collecting data such as testing hypotheses and describing samples. Methods of data collection include questionnaires, interviews, and observation. Questionnaires can be open-ended or closed-ended. Rating scales are used to quantify observations and come in formats like graphic, descriptive, and numerical scales. Selection of data collection methods depends on factors like the research subjects and purpose.
Difference between quantitative and qualitative research – Safi Nawam
Researchers usually work within a paradigm that is consistent with their world view, and that gives rise to the types of question that excite their curiosity.
The maturity of the concept of interest may also point to one paradigm or the other: when little is known about a topic, a qualitative approach is often more fruitful than a quantitative one.
Attitude scale and critical incident technique – Shaells Joshi
The individual experienced many failures and losses over multiple years, including losing his job, failing in business, the death of his wife, and defeat in repeated campaigns for public office. Despite these setbacks, he was eventually elected President of the United States.
Keys to success with assessment and evaluation – Frank Cervone
Evaluation and assessment are grounded in social research methods. Proper research methodology and qualified personnel are critical to conducting effective evaluations. There are different types of evaluations - formative evaluations aim to improve programs, while summative evaluations examine outcomes. Effective evaluation requires understanding the goals and pretesting data collection instruments to ensure they provide useful information. Focus groups and understanding customer needs can also guide evaluation efforts.
Inquiry forms questionnaire, opinionnaire, attitude scale, checklist, rating... – DrGavisiddappa Angadi
The tools of research in education can be classified broadly into the following categories:
A. Psychological Tests
Achievement Test
Aptitude Test
Intelligence Test
Creativity Tests
Interest inventory
Behavioral Procedures
Neuropsychological Tests
Personality measures, etc.
B. Inquiry forms
Questionnaire
Checklist
Score-card
Schedule
Rating Scale
Opinionnaire
Attitude Scale
C. Observation
D. Interview
E. Sociometric Techniques.
This document discusses various methods for collecting data, including observation, interviews, and focus group discussions. It provides details on each method, such as observation being either participant or non-participant, interviews being structured or unstructured, and focus groups allowing participants to discuss topics in a group setting. The document also distinguishes between primary and secondary data sources, with primary sources involving direct collection of data and secondary sources involving published data.
This document discusses various models and assessments of moral education. It defines moral education as helping children develop beliefs about right and wrong to guide their behavior. It then describes several models of moral education:
- The Rationale Building Model focuses on clarifying values and how teachers make moral decisions.
- The Consideration Model encourages living for others as a way to liberate oneself from selfishness.
- The Value Clarification Model sees values as personal opinions shaped by increasing self-awareness in a morally pluralistic society.
- The Social Action Model teaches influencing policy through environmental actions.
It also discusses Kohlberg's theory of moral development, which progresses in three stages from following rules to upholding principles.
This document discusses value education and human values. It defines value education as the process by which a person develops abilities, attitudes and behaviors aligned with their society's positive values. Some key human values discussed include respect, acceptance and empathy toward other humans. The document also examines concepts like truthfulness, sacrifice, sincerity and character formation that contribute to developing a positive personality. It analyzes challenges faced by adolescents, including emotional and behavioral changes, and the need for gender sensitivity.
Inquiry forms questionnaire, opinionnaire, attitude scale, checklist, rating...DrGavisiddappa Angadi
The tools of research in education can be classified broadly into the following categories:
A. Psychological Tests
Achievement Test
Aptitude Test
Intelligence Test
Creativity Tests
Interest inventory
Behavioral Procedures
Neuropsychological Tests
Personality measures etc.
B. Inquiry forms
Questionnaire
Checklist
Score-card
Schedule
Rating Scale
Opinionnaire
Attitude Scale
C. Observation
D. Interview
E. Sociometric Techniques.
2. TOOLS OF RESEARCH
A researcher requires many data-gathering
tools or techniques. Tests are tools of
measurement that guide the researcher in
data collection and in evaluation.
Tools vary in complexity, interpretation,
design and administration. Each tool is suitable
for collecting a certain type of
information.
3. The selection of suitable instruments or tools
is of vital importance for successful research.
Different tools are suitable for collecting
various kinds of information for various
purposes.
The research worker may use one or more of
the tools in combination for his purpose.
The systematic way and procedure by which a
complex or scientific task is accomplished is
known as the technique.
4. MAJOR TOOLS OF RESEARCH IN EDUCATION
• Inquiry forms
– Questionnaire
– Checklist
– Score-card
– Schedule
– Rating Scale
– Opinionnaire
– Attitude Scale
• Observation
• Interview
• Sociometry
• Psychological Tests
– Achievement Test
– Aptitude Test
– Intelligence Test
– Interest inventory
– Personality measures etc.
5. QUESTIONNAIRE
“A questionnaire is a systematic compilation of
questions that are submitted to a sampling of the
population from which information is desired.”
The questionnaire is a form prepared and
distributed to secure responses to certain
questions. It is a device for securing answers to
questions by using a form which the respondent
fills in himself.
It is a systematic compilation of questions and
an important instrument for gathering
information from widely scattered sources.
6. Characteristics of a Good Questionnaire
It deals with an important or significant topic.
Its significance is carefully stated on the questionnaire itself or
in its covering letter.
It seeks only data which cannot be obtained from sources
like books, reports and records.
It is as short as possible, only long enough to get the essential
data.
It is attractive in appearance, neatly arranged and clearly
duplicated or printed.
Directions are clear and complete, and important terms are clarified.
The questions are objective, with no clues, hints or suggestions.
Questions are presented in order from simple to complex.
Double negatives, adverbs and descriptive adjectives are
avoided.
Double-barrelled questions (putting two questions in one)
are also avoided.
7. TYPES OF QUESTIONNAIRE
Questionnaires can be of various types on the basis
of their preparation:
Structured vs. Unstructured
Closed vs. Open
Fact vs. Opinion
Positive vs. Negative
8. DESIGN OF QUESTIONNAIRE
After the questions are constructed on the basis of
these characteristics, the questionnaire should be
designed with some essential elements:
Background information about the questionnaire.
Instructions to the respondent.
The allocation of serial numbers.
Coding boxes.
9. MERITS & DEMERITS OF QUESTIONNAIRE
• Merits of the Questionnaire Method
• It is very economical.
• It is a time-saving process.
• It allows the research to cover a wide area.
• It is very suitable for special types of responses.
• It is most reliable in special cases.
• Demerits of the Questionnaire Method
• It yields only limited responses.
• Lack of personal contact.
• Greater possibility of wrong answers.
• Chances of receiving incomplete responses are high.
• Sometimes answers may be illegible.
• It may be useless for many problems.
10. CHECKLIST
• A checklist is a type of informational job aid used
to reduce failure by compensating for the potential
limits of human memory and attention.
• It helps to ensure consistency and completeness in
carrying out a task.
• A basic example is a ‘to-do list’.
• The main purpose of a checklist is to call attention
to various aspects of an object or situation, to see
that nothing of importance is overlooked.
11. USES OF CHECKLISTS
• To collect facts for educational surveys.
• To record behaviour in observational studies.
• To use in educational appraisal studies – of
school buildings, property, plans, textbooks,
instructional procedures and outcomes, etc.
• To rate personality.
• To assess the interests of the subjects.
Kuder’s Interest Inventory and Strong’s
Interest Blank are also checklists.
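An observational checklist of the kind described above can be represented very simply in code. The sketch below is illustrative only (the behaviour items and student labels are invented for the example): each observed behaviour is marked present or absent, and marks are tallied across subjects.

```python
# A minimal sketch of a behaviour checklist used in an observational study.
# Items and student labels are hypothetical examples, not from any instrument.
observations = {
    "student_1": {"raises hand": True, "works independently": False,
                  "shares materials": True},
    "student_2": {"raises hand": True, "works independently": True,
                  "shares materials": True},
}

def tally(item):
    """Count how many observed students showed the given behaviour."""
    return sum(marks[item] for marks in observations.values())

print(tally("raises hand"))           # 2
print(tally("works independently"))   # 1
```

The same structure extends naturally to checklists for appraising school buildings, textbooks or procedures: only the item labels change.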
12. OPINIONNAIRE
• “Opinion polling or opinion gauging represents a
single-question approach. The answers are usually
in the form of ‘yes’ or ‘no’.
• An undecided category is often included.
Sometimes a larger number of response alternatives
is provided.”
• Opinionnaires are usually used in descriptive
research, which demands a survey of the opinions
of the individuals concerned.
• Public opinion research is an example of an opinion
survey. Opinion polling enables the researcher to
forecast coming events successfully.
13. CHARACTERISTICS OF OPINIONNAIRE
• The opinionnaire makes use of statements or
questions on different aspects of the problem
under investigation.
• Responses are expected on either three-point or
five-point scales.
• It uses favourable or unfavourable statements.
• It may be sub-divided into sections.
• Gallup poll ballots generally make use of
questions instead of statements.
• Public opinion polls generally rely on
personal contacts rather than mail ballots.
14. RATING SCALE
• The rating scale is one of the inquiry forms. Rating is a
term applied to an expression of opinion or judgment
regarding some situation, object or character.
Opinions are usually expressed on a scale of values.
• Rating techniques are devices by which such
judgments may be quantified. The rating scale is a
very useful device for assessing quality, especially
when quality is difficult to measure objectively.
15. • Rating scales record judgments or opinions and indicate
the degree or amount of a quality; the different degrees
are arranged along a line, which is the scale.
• For example: How good was the performance?
Excellent Very good Good Average Below average Poor Very poor
___|________|________|_______|__________|__________|__________|____
• Ratings can be obtained through one of three major
approaches:
– Paired comparison
– Ranking
– Rating scales
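The three approaches above can be contrasted on toy data. The sketch below is only illustrative (the subjects "A", "B", "C", the judges and all numbers are invented): paired comparison counts how often a subject is preferred, ranking averages each subject's rank across judges, and a rating scale averages direct numerical ratings.

```python
from collections import Counter

# Paired comparison: each (winner, loser) tuple records one judge's
# preference in one pairing of subjects.
paired_judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
preference_counts = Counter(winner for winner, _ in paired_judgments)

# Ranking: each judge orders all subjects; a lower mean rank is better.
rankings = [["A", "B", "C"], ["A", "C", "B"]]
mean_rank = {s: sum(r.index(s) + 1 for r in rankings) / len(rankings)
             for s in rankings[0]}

# Rating scale: each subject is rated directly on a 1-7 scale by each
# judge, and the ratings are averaged.
ratings = {"A": [7, 6], "B": [5, 4], "C": [3, 4]}
mean_rating = {s: sum(v) / len(v) for s, v in ratings.items()}

print(preference_counts["A"], mean_rank["A"], mean_rating["A"])  # 3 1.0 6.5
```

Note the trade-off the slides hint at: paired comparison needs many more judgments than direct rating, but asks the judge a much easier question.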
16. PURPOSE OF RATING SCALE
• Rating scales have been successfully utilized
for measuring the following:
• Teacher performance/effectiveness.
• Personality, anxiety, stress, emotional
intelligence, etc.
• School appraisal, including appraisal of
courses, practices and programmes.
• A rating scale involves three factors:
– The subjects or phenomena to be rated.
– The continuum along which they will be rated.
– The judges who will do the rating.
17. USE OF RATING SCALE
• Rating scales are used for testing the validity of
many objective instruments, like paper-and-pencil
inventories of personality.
• Helpful in writing reports to parents.
• Helpful in filling out admission blanks for
colleges.
• Helpful in finding out student needs.
• Making recommendations to employers.
• Supplementing other sources of understanding
about the child.
• Having a stimulating effect upon the individuals
who are rated.
18. ATTITUDE SCALE
• The attitude scale is a form of appraisal procedure and is
also one of the inquiry forms. Attitude scales have been
designed to measure the attitude of a subject or group of
subjects towards issues, institutions and groups of people.
• The term attitude is defined in various ways: “The
behaviour which we define as attitudinal or attitude is a
certain observable set of the organism, a relative tendency
preparatory to and indicative of more complete
adjustment.” – L. L. Bernard
• “An attitude may be defined as a learned emotional
response set for or against something.” – Barr David
Johnson
19. ATTITUDE SCALE
• An attitude is spoken of as a tendency of an
individual to react in a certain way towards a
phenomenon.
• It is what a person feels or believes in.
• It is the inner feeling of an individual.
• It may be positive, negative or neutral.
20. Purpose of Attitude Scale
• In educational research, these scales are used
especially for finding the attitudes of persons
on different issues like:
• Co-education
• Religious education
• Corporal punishment
• Democracy in schools
• Linguistic prejudices
• International co-operation etc.
21. Characteristics of Attitude Scale
• It provides a quantitative measure on a
unidimensional scale or continuum.
• It uses statements ranging from the extreme positive
to the extreme negative position.
• It generally uses a five-point scale, as discussed
under rating scales.
• It can be standardized, and norms can be worked
out.
• It disguises the attitude object rather than directly
asking about the attitude towards the subject.
22. Thurstone Technique of Scaled Values
• A Thurstone scale has a number of “agree” or “disagree” statements.
• It is a unidimensional scale to measure attitudes towards people.
• Developing the scale is time-consuming and relatively complex compared to
other scales.
• The Thurstone technique is also known as the technique of equal-appearing
intervals.
• When people refer to the “Thurstone Scale”, they are usually talking about
the method of equal-appearing intervals.
• It is called “equal-appearing intervals” because, when you choose the items
for your test, you pick items equally spaced apart.
The other two variations are:
• The method of successive intervals: this method is more challenging to
implement than equal-appearing intervals.
• The method of paired comparisons: requires twice as many judgments as the
equal-appearing intervals method and can quickly become very time-consuming.
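Scoring under equal-appearing intervals can be sketched briefly. In this method each statement carries a scale value assigned by judges (here on a 1–11 continuum, 1 = most unfavourable), and a common convention is to score a respondent by the median scale value of the statements they endorse. The statements and scale values below are invented for illustration, not taken from a standardized instrument.

```python
from statistics import median

# Hypothetical statements with judge-assigned scale values (1-11 continuum).
scale_values = {
    "Co-education benefits all students": 9.8,
    "Co-education is acceptable in some subjects": 6.1,
    "Co-education should be abolished": 1.5,
}

def thurstone_score(endorsed):
    """Median scale value of the statements the respondent agreed with."""
    return median(scale_values[s] for s in endorsed)

# A respondent endorsing the two favourable statements scores about 7.95.
print(thurstone_score(["Co-education benefits all students",
                       "Co-education is acceptable in some subjects"]))
```

Using the median rather than the mean keeps one oddly endorsed statement from dragging the score far along the continuum.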
26. LIKERT SCALE
• The Likert scale uses items worded for or against the
proposition, with a five-point rating response indicating the
strength of the respondent’s approval or disapproval of the
statement.
• A Likert scale is a type of rating scale used to measure
attitudes or opinions. With this scale, respondents are
asked to rate items on a level of agreement. For example:
• Strongly agree
• Agree
• Neutral
• Disagree
• Strongly disagree
27. • Five to seven response options are usually used in
the scale. The scale doesn’t have to state “agree” or
“disagree”; dozens of variations are possible on
themes like agreement, frequency, quality and
importance. For example:
• Agreement: Strongly agree to strongly disagree.
• Frequency: Often to never.
• Quality: Very good to very bad.
• Likelihood: Definitely to never.
• Importance: Very important to unimportant.
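Scoring a Likert scale can be sketched as follows (a minimal example in Python; the response labels and items are hypothetical). Options are assigned 1–5, and items worded against the proposition are reverse-coded so that higher totals always indicate a more favourable attitude:

```python
# Map response labels to scores on a five-point scale.
AGREEMENT = {"Strongly agree": 5, "Agree": 4, "Neutral": 3,
             "Disagree": 2, "Strongly disagree": 1}

def score_item(response: str, reverse: bool = False) -> int:
    """Score one response; reverse-code items worded against the proposition."""
    score = AGREEMENT[response]
    return 6 - score if reverse else score

# One respondent's answers; reverse=True marks negatively worded items.
answers = [("Strongly agree", False), ("Disagree", True), ("Neutral", False)]
total = sum(score_item(r, rev) for r, rev in answers)
print(total)  # 5 + 4 + 3 = 12
```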
28. SCHEDULE
When a researcher uses a set of questions for
interview purposes, it is known as a schedule.
“Schedule is the name usually applied to a set
of questions, which are asked and filled in by an
interviewer in a face to face situation with
another.” -W.J. Goode & P. K. Hatt
A schedule is a list of questions formulated and
presented with the specific purpose of testing
an assumption or hypothesis. In the schedule
method, the interview occupies a central place
and plays a vital role.
29. Important Features of Schedule
• The schedule is presented by the interviewer. The questions are
asked and the answers are noted down by him.
• The list of questions is a mere formal document; it need not be
attractive.
• The schedule can be used in a very narrow sphere of social
research.
• It aids to delimit the scope of the study and to concentrate on the
circumscribed elements essential to the analysis.
• It aims at delimiting the subject.
• In the schedule the list of questions is preplanned and noted down
formally and the interviewer is always armed with the formal
document detailing the questions.
30. OBSERVATION TECHNIQUE
• This is the most commonly used technique of
evaluation research.
• It is used for evaluating the cognitive and non-
cognitive aspects of a person.
• It is used in evaluating performance, interests,
attitudes and values towards life problems
and situations.
• It is the most useful technique for evaluating the
behaviour of children.
31. • “It is a thorough study based on visual
observation. Under this technique group
behaviours and social institution problems are
evaluated.” -C. Y. Younge
• “Observation employs relatively more the visual
senses than the audio and vocal organs.” -C.A.
Mourse
• The study of cause-effect relationships and of
events in their original form is known as
observation.
32. Purpose of observation
• To collect data directly.
• To collect substantial amount of data in short
time span.
• To get eyewitness, first-hand data in real-life
situations.
• To collect data in a natural setting.
33. Characteristics of observation
• Observation is systematic.
• It is specific.
• It is objective.
• It is quantitative.
• The record of observation should be made
immediately.
• An expert observer should observe the situation.
• Its results can be checked and verified.
34. Types of Observation
• Structured and Unstructured
• Participant and Non-participant
Steps of Effective Observation:
As a research tool, effective observation needs
effective:
• Planning
• Execution
• Recording and
• Interpretation
35. Interview
• The interview is a two-way method which permits
an exchange of ideas and information.
• “Interviewing is fundamentally a process of
social interaction.” -W. J. Goode & P.K. Hatt
• “The interview constitutes a social situation
between two persons, the psychological
processes involved requiring both individuals
mutually to respond, though the social research
purpose of the interview calls for a varied
response from the two parties concerned.” -
Vivien Palmer
36. Importance of Interview
The interview is particularly suited to exploring:
• Emotions, experiences and feelings.
• Sensitive issues.
• Privileged information.
• It is appropriate when dealing with young children, illiterates, and
respondents with language difficulties or limited intelligence.
• It supplies the detail and depth needed to ensure that the
questionnaire asks valid questions while the questionnaire is prepared.
• It can serve as a follow-up to a questionnaire and complement it.
• It can be combined with other tools in order to corroborate facts
using a different approach.
• It is one of the normative survey methods, but it is also applied in
historical, experimental and case studies.
37. Types of Interview
• Structured Interview
• Semi-Structured Interview
• Unstructured Interview
• Single Interview
• Group Interview
• Focus Group Interview
38. INVENTORY
• An inventory is a list, record or catalogue of
traits, preferences, attitudes, interests or
abilities, used to evaluate personal
characteristics or skills.
• The purpose of an inventory is to make a list
about a specific trait, activity or programme
and to check to what extent that trait or
ability is present. Types of inventories include:
– Interest Inventory and
– Personality Inventory
39. • Persons differ in their interests, likes and dislikes.
Interests are a significant element in the personality
pattern of individuals and play an important role in
their educational and professional careers.
• The tools used for describing and measuring the interests
of individuals are interest inventories or interest
blanks. They are self-report instruments in which
individuals note their own likes and dislikes.
• They are of the nature of standardized interviews in
which the subject gives an introspective report of his
feelings about certain situations and phenomena
which is then interpreted in terms of interest.
• The use of interest inventories is most frequent in the
areas of educational and vocational guidance and case
studies.
40. TOOLS AND TECHNIQUES OF RESEARCH
• Steps of Preparing a Research Tool
• Types of Validity
• Factors affecting validity
• Reliability
• The methods of Estimating Reliability
• Item Analysis
• Steps involved in Item Analysis
41. • The first step in preparing a research tool is to
develop a pool of items.
• Item analysis involves:
– computing the difficulty index and
discrimination index of each item, and
– establishing the validity of the tool.
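The two item-analysis indices can be sketched as follows (a minimal example assuming the common practice of comparing upper and lower scoring groups, e.g. the top and bottom 27%; the numbers are hypothetical):

```python
def difficulty_index(correct: int, total: int) -> float:
    """Proportion of examinees answering the item correctly."""
    return correct / total

def discrimination_index(upper_correct: int, lower_correct: int,
                         group_size: int) -> float:
    """Difference between the proportions correct in the upper and
    lower scoring groups of equal size."""
    return (upper_correct - lower_correct) / group_size

p = difficulty_index(30, 50)         # 0.6: moderate difficulty
d = discrimination_index(14, 6, 20)  # 0.4: item separates high and low scorers
print(p, d)
```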
42. Validity
• Validity is the most important consideration in the selection and
use of any testing procedures.
• The validity of a test, or of any measuring instrument, depends
upon the degree of exactness with which it measures what it
purports to measure.
• The validity of a test may be defined as “the accuracy with which
a test measures what it attempts to measure.”
• It is also defined as “The efficiency with which a test measures
what it attempts to measure”.
• Lindquist has defined validity “as the accuracy with which a test
measures that which it is intended to measure, or as the degree to which it
approaches infallibility in measuring what it purports to measure”.
43. • On the basis of the preceding definitions, it
can be seen that:
• Validity is a matter of degree. It may be high,
moderate or low.
• Validity is specific rather than general. A test
may be valid for one specific purpose but not
for another, valid for one specific group of
students but not for another.
44. TYPES OF VALIDITY
• Content Validity
“Content validity involves essentially
the systematic examination of the
test content to determine whether it
covers a representative sample of the
behaviour domain to be measured”.
45. Criterion-related Validity
• This is also known as empirical validity.
• There are two forms of criterion-related validity.
• Predictive Validity
– It refers to how well the scores obtained on the tool
predict future criterion behavior.
• Concurrent Validity
– It refers to how well the scores obtained on the tool
are correlated with present criterion behaviour.
46. Construct Validity
• It is the extent to which the tool measures a
theoretical construct or trait or psychological
variable.
• It refers to how well our tool seems to
measure a hypothesized trait.
47. FACTORS AFFECTING VALIDITY
Unclear Direction
Vocabulary
Difficult Sentence Construction
Poorly Constructed Test Items
Use of Inappropriate Items
Difficulty Level of Items
Influence of Extraneous Factors
Inappropriate Time Limit
Inappropriate Coverage
Inadequate Weightage
• Halo Effect (a general impression of one aspect influencing judgement of the rest)
48. RELIABILITY
A test score is called reliable when we have
reason to believe the score to be stable and
trustworthy.
“The degree of consistency with which the test
measures what it does measure”
“Reliability means consistency of scores
obtained by the same individual when re-
examined with the same test on different sets of
equivalent items or under other variable
examining conditions”.
49. A psychological or educational measurement is indirect and is
conducted with less precise instruments, on traits that are not
always stable. There are many reasons why a pupil’s test score
may vary –
• Trait Instability : The characteristics we measure may change
over a period of time.
• Administrative Error : Any change in directions, timing or
amount of rapport with the test administrator may cause score
variability.
• Scoring Error : Inaccuracies in scoring a test paper will affect the
scores.
• Sampling Error : Any particular questions we ask in order to
infer a person’s knowledge may affect his score.
• Other Factors : Such as health, motivation, degree of fatigue of
the pupil, good or bad luck in guessing may cause score variability
50. THE METHODS OF ESTIMATING RELIABILITY
The four procedures in common use for
computing the reliability coefficient of
a test
–Test – Retest Method.
–The Alternate or Parallel Forms Method.
–The Internal Consistency Reliability.
–The Inter-rater Reliability.
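The test-retest coefficient of stability, for instance, is simply the Pearson correlation between the two sets of scores. A minimal sketch (the pupil scores are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same pupils on two administrations
first = [40, 55, 62, 70, 48]
second = [42, 53, 65, 68, 50]
print(round(pearson_r(first, second), 3))  # coefficient of stability
```

A coefficient near 1.0 indicates that pupils keep roughly the same relative standing on both administrations.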
51. Test-Retest (Repetition) Method
(Co-efficient of Stability)
• In the test-retest method, a single form of a
test is administered twice to the same sample
with a reasonable gap. Thus two sets of