This document provides an introduction to quantitative research methods. It discusses key concepts like research methodology, variables, hypotheses, experimental design, and statistical analysis. Specifically, it covers:
- The difference between research methodology and methods, and examples of methodology scopes.
- Key terms like variables, hypotheses, and types of errors in hypothesis testing.
- How to plan, conduct, and analyze experiments, including best-guess experiments and one-factor-at-a-time experiments.
- Basic statistical concepts like mean, variance, normal distribution, and the t-distribution.
- Types of experimental designs like factorial experiments and comparative experiments.
This document discusses qualitative research methods. It defines qualitative research as seeking to understand social phenomena through natural settings and the meanings and experiences of participants. Qualitative research employs descriptive data from real-world contexts and inductive analysis to describe findings from the participants' perspectives. Some key methods are participant observation, interviews, and focus groups. Qualitative research is flexible and asks open-ended questions to get complex responses. It can help interpret quantitative data by explaining real-world situations.
This document provides an overview of qualitative research. It discusses the history and characteristics of qualitative research, including that it seeks to understand perspectives from local populations. The document outlines various qualitative methods like case studies, ethnography, and grounded theory. It also discusses issues in qualitative research such as gaining entry, selecting participants, and enhancing validity. Strategies to reduce bias like triangulation and examining outliers are presented.
Basic research is the search for fundamental knowledge and understanding without a specific commercial application or use in mind. It aims to increase scientific knowledge for its own sake. Some key aspects of basic research include that it is theoretical, builds new knowledge, explores fundamental principles without seeking to solve direct problems, and lays the foundation for applied research. The goal is to expand understanding of phenomena through studying questions like the origins of the universe or composition of subatomic particles, without necessarily creating something new.
The document discusses the research process and provides details about a case study on a department store patronage project. The purpose of the project was to evaluate a major department store, Sears, and its competitors. A survey was conducted through in-home interviews of 271 households to understand customer preferences, perceptions, and factors influencing their choice of department stores. The findings helped identify Sears' weaknesses and develop appropriate marketing strategies to improve its image and sales.
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making.
The document discusses research methodology and defines research. It provides examples of what constitutes research and what does not. Research is defined as a systematic, logical process that includes understanding the problem, reviewing literature, collecting and analyzing data, drawing conclusions, and generalizing findings. The document also discusses types of research questions, purposes of research, and common challenges in conducting research.
Research Design: Quantitative, Qualitative and Mixed Methods Design (Thiyagu K)
A research design is the structural framework of the methods and techniques a researcher uses. These presentation slides explain the research designs of quantitative, qualitative, and mixed-methods studies.
This document defines and describes different types of research. It discusses research purposes including exploratory, descriptive, and explanatory research. It also covers research uses in basic and applied contexts. The time dimension of cross-sectional and longitudinal research is outlined. Finally, it details quantitative and qualitative data collection techniques.
This document discusses point and interval estimation. It defines an estimator as a function used to infer an unknown population parameter based on sample data. Point estimation provides a single value, while interval estimation provides a range of values with a certain confidence level, such as 95%. Common point estimators include the sample mean and proportion. Interval estimators account for variability in samples and provide more information than point estimators. The document provides examples of how to construct confidence intervals using point estimates, confidence levels, and standard errors or deviations.
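The interval construction described above can be sketched with Python's standard `statistics` module. The sample of ages below is hypothetical, and a z critical value is used for simplicity; small samples would normally use the t-distribution instead:

```python
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Normal-approximation interval estimate for the population mean."""
    n = len(sample)
    point_estimate = mean(sample)                    # the point estimator
    se = stdev(sample) / n ** 0.5                    # standard error of the mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)   # ~1.96 for 95% confidence
    return point_estimate - z * se, point_estimate + z * se

ages = [32, 35, 29, 41, 38, 36, 33, 40, 31, 37]      # hypothetical sample
lo, hi = confidence_interval(ages)                    # interval around the mean
```

The interval widens as the confidence level rises or the sample shrinks, which is exactly the extra information an interval estimate carries over a single point estimate.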
This document compares parametric and non-parametric statistical analyses. Parametric analyses make assumptions about the population distribution and variance, are applicable to interval/ratio data, and can be affected by outliers. Non-parametric analyses make no assumptions, can be used with ordinal/nominal data, and are not affected by outliers. The document provides examples of common parametric tests (t-tests, ANOVA) and non-parametric alternatives (Mann-Whitney, Kruskal-Wallis), and guidelines for determining whether a parametric or non-parametric approach is more appropriate.
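To make the contrast concrete, here is a minimal sketch (on small, hypothetical samples) of a parametric statistic, Welch's t, next to its rank-based non-parametric alternative, the Mann-Whitney U; only the test statistics are computed, not p-values:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Parametric: Welch's t statistic (assumes roughly normal data)."""
    nx, ny = len(x), len(y)
    return (mean(x) - mean(y)) / (variance(x) / nx + variance(y) / ny) ** 0.5

def mann_whitney_u(x, y):
    """Non-parametric: Mann-Whitney U for sample x, using average ranks for ties."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1                      # run of tied values at ranks i+1 .. j
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    r_x = sum(ranks[v] for v in x)      # rank sum of the first sample
    return r_x - len(x) * (len(x) + 1) / 2

a = [1.1, 2.3, 2.9, 3.8]                # hypothetical samples
b = [4.0, 4.2, 5.1, 6.3]
```

Because U depends only on ranks, an extreme outlier in either sample leaves it unchanged, while the t statistic can move substantially — the property the comparison above turns on.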
Topic 1: Introduction to quantitative research (Audrey Antee)
This document provides an introduction to quantitative research. It defines quantitative research as collecting and analyzing numerical data to explore, describe, explain, or predict trends. Quantitative research aims for objectivity and controls outside factors. It states hypotheses and uses statistics to analyze results. The document outlines reasons for quantitative research such as exploration, description, explanation, prediction, and evaluation. It also describes common types of quantitative research designs and the key components of measurement, sampling, research design, and statistical procedures.
This document provides an introduction to research methodology. It defines research as a systematic technique for thinking that employs specialized tools and procedures to solve problems. The objectives of research are outlined as solving problems scientifically, generating new knowledge or theories, verifying facts, and analyzing events or phenomena. The key steps of research are identified as identifying the problem or area of research, reviewing literature, formulating the problem, deciding on objectives and hypotheses, collecting and analyzing data, interpreting results, and writing the research report. Finally, the document distinguishes between research methods, which are techniques for collecting and analyzing data, and research methodology, which is the overall systematic approach to solving a research problem.
The document outlines key aspects of research methodology including:
1. The objectives of research such as defining problems, formulating hypotheses, collecting and evaluating data, making deductions, and testing conclusions.
2. The different types of research including descriptive, applied, quantitative, conceptual, empirical, qualitative, fundamental, and analytical research.
3. The methods of collecting data including primary methods like questionnaires, observations, interviews, and schedules and secondary methods of collecting published and unpublished data from various sources.
This document provides an overview of key concepts in research methodology, including:
1. It defines research as an organized and systematic process of finding answers to questions through a defined set of steps and procedures.
2. It discusses different types of research including quantitative, qualitative, basic, applied, longitudinal, descriptive, classification, comparative, exploratory, explanatory, causal, theory testing, and theory building research.
3. It also discusses alternatives to research-based knowledge such as relying on authority, tradition, common sense, media, and personal experience.
The document provides an overview of a course on qualitative research methods. It discusses key topics that will be covered in the lectures, including what qualitative research is, different qualitative research strategies and how to implement them, methods for collecting data through observation and interviews, and analyzing qualitative data. The lectures will cover theory, qualitative research strategies and processes, data collection techniques, and critiques of qualitative research approaches.
This document outlines the key elements of quantitative research including hypothesis testing, variables, sampling methods, measurement, validity and reliability, statistical analysis, and causal relationships. Quantitative research aims to systematically test hypotheses through precise standardized measurement and statistical analysis of numerical data. Variables are defined, data is collected from samples using standardized tools and procedures, and results are analyzed using statistical techniques to determine relationships between variables and test hypotheses. The goal is to explain phenomena through objective and replicable quantitative analysis.
This document provides an introduction to probability theory and different probability distributions. It begins with defining probability as a quantitative measure of the likelihood of events occurring. It then covers fundamental probability concepts like mutually exclusive events, additive and multiplicative laws of probability, and independent events. The document also introduces random variables and common probability distributions like the binomial, Poisson, and normal distributions. It provides examples of how each distribution is used and concludes with characteristics of the normal distribution.
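The three named distributions can be sketched directly from their standard formulas using only Python's `math` module:

```python
from math import comb, exp, factorial, pi, sqrt

def binomial_pmf(k, n, p):
    """P(X = k) successes in n independent trials with success probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(X = k) events when events occur at average rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution at x."""
    z = (x - mu) / sigma
    return exp(-0.5 * z * z) / (sigma * sqrt(2 * pi))
```

As a sanity check, the binomial probabilities for fixed n and p sum to 1 over k = 0..n, and the standard normal density peaks at about 0.399 at x = 0.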
UNIVARIATE & BIVARIATE ANALYSIS
UNIVARIATE, BIVARIATE & MULTIVARIATE
UNIVARIATE ANALYSIS
-One variable analysed at a time
BIVARIATE ANALYSIS
-Two variables analysed at a time
MULTIVARIATE ANALYSIS
-More than two variables analysed at a time
TYPES OF ANALYSIS
DESCRIPTIVE ANALYSIS
INFERENTIAL ANALYSIS
DESCRIPTIVE ANALYSIS
Transformation of raw data
Facilitate easy understanding and interpretation
Deals with summary measures relating to sample data
E.g. what is the average age of the sample?
INFERENTIAL ANALYSIS
Carried out after descriptive analysis
Inferences drawn on population parameters based on sample results
Generalizes results to the population based on sample results
E.g. is the average age of the population different from 35?
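The two example questions can be put side by side in a short sketch. The ages below are a hypothetical sample, and a z approximation stands in for the more exact one-sample t-test:

```python
from statistics import NormalDist, mean, stdev

ages = [36, 41, 29, 38, 33, 44, 31, 39, 35, 42]   # hypothetical sample

# Descriptive analysis: summarise the sample itself
sample_mean = mean(ages)

# Inferential analysis: is the population mean different from 35?
# z approximation; a t-test would be more exact for a sample this small.
n = len(ages)
z = (sample_mean - 35) / (stdev(ages) / n ** 0.5)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided
```

The descriptive step only states a fact about these ten respondents; the inferential step attaches a probability to generalizing that fact to the population.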
DESCRIPTIVE ANALYSIS OF UNIVARIATE DATA
1. Prepare frequency distribution of each variable
Missing Data
Situation where certain questions are left unanswered
Analysis of multiple responses
Measures of central tendency
3 measures of central tendency
1.Mean
2.Median
3.Mode
MEAN
Arithmetic average of a variable
Appropriate for interval and ratio scale data
x̄ = Σxᵢ / n
MEDIAN
Calculates the middle value of the data
Computed for ratio, interval or ordinal scale.
Data needs to be arranged in ascending or descending order
MODE
Point of maximum frequency
Should not be computed for ordinal or interval data unless grouped.
Widely used in business
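A quick sketch with Python's standard `statistics` module, on a hypothetical ratio-scale sample, shows how the three measures can disagree:

```python
from statistics import mean, median, mode

# Hypothetical ages; ratio-scale data, so all three measures apply
ages = [23, 25, 25, 27, 31, 34, 52]

m   = mean(ages)    # arithmetic average: 31 — pulled upward by the outlier 52
med = median(ages)  # middle value of the ordered data: 27
mo  = mode(ages)    # point of maximum frequency: 25
```

That the mean (31) exceeds the median (27) here is the usual signature of a right-skewed distribution, which is why the median is often preferred when outliers are present.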
MEASURES OF DISPERSION
Measures of central tendency do not describe the spread of a variable's distribution
4 measures of dispersion
1.Range
2.Variance and standard deviation
3.Coefficient of variation
4.Relative and absolute frequencies
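The first three measures of dispersion can be sketched with the standard `statistics` module on a hypothetical sample:

```python
from statistics import mean, stdev, variance

data = [12, 15, 14, 10, 18, 20, 16]   # hypothetical sample, mean = 15

data_range = max(data) - min(data)    # 1. range
var = variance(data)                  # 2. sample variance
sd = stdev(data)                      #    sample standard deviation (sqrt of variance)
cv = sd / mean(data)                  # 3. coefficient of variation (unit-free)
```

The coefficient of variation divides out the units, which makes it the right choice when comparing spread across variables measured on different scales.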
DESCRIPTIVE ANALYSIS OF BIVARIATE DATA
Three measures are commonly used:
1. Cross tabulation
2. Spearman's rank correlation coefficient
3. Pearson's linear correlation coefficient
Cross Tabulation
Responses of two questions are combined
Spearman’s rank order correlation coefficient.
Used in case of ordinal data
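Both coefficients can be sketched in a few lines; this version assumes untied data when assigning the Spearman ranks:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson's linear correlation coefficient (interval/ratio data)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r applied to the ranks.

    Assumes no tied values; ties would need average ranks.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))
```

Because Spearman's coefficient sees only the ordering, any monotone relationship — even a strongly non-linear one — yields rho = 1, which is why it suits ordinal data.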
The document provides an overview of quantitative research methodology. It discusses key concepts including population, sampling, samples, and qualitative scales. Specifically, it defines population as any complete group with at least one characteristic in common. It explains that sampling is used to select a subset of a population for a study. The document also outlines different types of measurement scales in quantitative research including nominal, ordinal, interval, and ratio scales.
Research involves defining problems, formulating hypotheses, collecting and evaluating data, reaching conclusions, and testing those conclusions. It is a systematic process that requires accurate data collection and adherence to ethical standards. Research aims to generate new knowledge and insights through logical reasoning using both inductive and deductive methods. The purpose of research can be descriptive, explanatory, or exploratory. There are different types of research methodologies including basic vs applied, descriptive vs exploratory, correlational vs explanatory, qualitative vs quantitative, and conceptual vs empirical research.
Difference between qualitative and quantitative research (Shani Jyothis)
The document covers nursing research: quantitative research, qualitative research, the differences between them, and the process of research. Quantitative and qualitative research are compared on the basis of hypotheses, variables, analysis, and interpretation.
The document provides an overview of hypothesis testing. It begins by defining a hypothesis test and its purpose of ruling out chance as an explanation for research study results. It then outlines the logic and steps of a hypothesis test: 1) stating hypotheses, 2) setting decision criteria, 3) collecting data, 4) making a decision. Key concepts discussed include type I and type II errors, statistical significance, test statistics like the z-score, and assumptions of hypothesis testing. Factors that can influence a hypothesis test like effect size, sample size, and alpha level are also covered.
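The four steps can be traced in a short sketch; the numbers (mu = 100, sigma = 15, a sample of 36 with mean 106) are hypothetical, and sigma is assumed known so a z-score is the appropriate test statistic:

```python
from statistics import NormalDist

# 1) State hypotheses: H0: mu = 100 vs. H1: mu != 100 (two-sided)
mu0, sigma, alpha = 100, 15, 0.05

# 2) Set the decision criterion: reject H0 if |z| exceeds the critical value
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05

# 3) Collect data: hypothetical sample of n = 36 with mean 106
n, sample_mean = 36, 106
z = (sample_mean - mu0) / (sigma / n ** 0.5)   # test statistic

# 4) Make a decision
reject_h0 = abs(z) > z_crit
```

Raising alpha or the sample size makes rejection easier (more power, but more risk of a type I error for alpha), which is exactly the trade-off among the influencing factors listed above.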
This document discusses the development of hypotheses in research. It defines a hypothesis as a tentative answer to a research problem that can be tested. A hypothesis identifies an independent variable that affects a dependent variable. Formulating a hypothesis narrows a research question in a way that can be reasonably studied. Hypotheses vary depending on whether a study takes a qualitative or quantitative approach. Qualitative research uses questions rather than hypotheses, while quantitative research uses testable hypotheses relating independent and dependent variables. The document provides examples of writing quantitative hypotheses with different levels of specificity.
This document provides an overview of statistics and biostatistics. It defines statistics as the collection, analysis, and interpretation of quantitative data. Biostatistics refers to applying statistical methods to biological and medical problems. Descriptive statistics are used to summarize and organize data, while inferential statistics allow generalization from samples to populations. Common statistical measures include the mean, median, and mode for central tendency, and range, standard deviation, and variance for variability. Correlation analysis examines relationships between two variables. The document discusses various data types and measurement scales used in statistics. Overall, it serves as a basic introduction to key statistical concepts for research.
This document discusses guidelines for selecting a research problem and formulating hypotheses. It defines key terms like research problem, assumption, hypothesis, and title. It provides guidelines for writing titles, selecting research topics, formulating general and specific research problems. It also discusses the different forms hypotheses can take and their purposes and functions in research.
Quantitative Methods of Research: Intro to research
Once a researcher has written the research question, the next step is to determine the appropriate research methodology necessary to study the question. The three main types of research design methods are qualitative, quantitative and mixed methods.
Quantitative research involves the systematic collection and analysis of data.
This document provides an overview of inferential statistics. It defines inferential statistics as using samples to draw conclusions about populations and make predictions. It discusses key concepts like hypothesis testing, null and alternative hypotheses, type I and type II errors, significance levels, power, and effect size. Common inferential tests like t-tests, ANOVA, and meta-analyses are also introduced. The document emphasizes that inferential statistics allow researchers to generalize from samples to populations and test hypotheses about relationships between variables.
Research designs for quantitative studies (Nursing Path)
The document discusses research designs for quantitative studies. It describes the key components of a research design including the intervention, comparisons, controls for extraneous variables, and timing of data collection. It also outlines different types of research designs such as experimental, quasi-experimental, and non-experimental designs. Experimental designs manipulate an intervention and include a control group, while quasi-experimental designs do not randomly assign subjects. Non-experimental designs do not involve manipulation of an intervention.
This document discusses and provides examples of different research designs, including experimental and quasi-experimental designs. Experimental designs use random assignment and manipulation of an independent variable, with a control group for comparison. Quasi-experimental designs lack random assignment. True experiments use pre-test/post-test designs or post-test only designs. Quasi-experiments include non-equivalent control group designs and time series designs. Pre-experimental designs like one-shot case studies and one group pre-test/post-test designs provide little value due to the lack of control groups. Non-experimental designs do not manipulate variables and can only study correlation, not causation.
This document defines and describes different types of research. It discusses research purposes including exploratory, descriptive, and explanatory research. It also covers research uses in basic and applied contexts. The time dimension of cross-sectional and longitudinal research is outlined. Finally, it details quantitative and qualitative data collection techniques.
This document discusses point and interval estimation. It defines an estimator as a function used to infer an unknown population parameter based on sample data. Point estimation provides a single value, while interval estimation provides a range of values with a certain confidence level, such as 95%. Common point estimators include the sample mean and proportion. Interval estimators account for variability in samples and provide more information than point estimators. The document provides examples of how to construct confidence intervals using point estimates, confidence levels, and standard errors or deviations.
This document compares parametric and non-parametric statistical analyses. Parametric analyses make assumptions about the population distribution and variance, are applicable to interval/ratio data, and can be affected by outliers. Non-parametric analyses make no assumptions, can be used with ordinal/nominal data, and are not affected by outliers. The document provides examples of common parametric tests (t-tests, ANOVA) and non-parametric alternatives (Mann-Whitney, Kruskal-Wallis), and guidelines for determining whether a parametric or non-parametric approach is more appropriate.
Topic 1 introduction to quantitative researchAudrey Antee
This document provides an introduction to quantitative research. It defines quantitative research as collecting and analyzing numerical data to explore, describe, explain, or predict trends. Quantitative research aims for objectivity and controls outside factors. It states hypotheses and uses statistics to analyze results. The document outlines reasons for quantitative research such as exploration, description, explanation, prediction, and evaluation. It also describes common types of quantitative research designs and the key components of measurement, sampling, research design, and statistical procedures.
This document provides an introduction to research methodology. It defines research as a systematic technique for thinking that employs specialized tools and procedures to solve problems. The objectives of research are outlined as solving problems scientifically, generating new knowledge or theories, verifying facts, and analyzing events or phenomena. The key steps of research are identified as identifying the problem or area of research, reviewing literature, formulating the problem, deciding on objectives and hypotheses, collecting and analyzing data, interpreting results, and writing the research report. Finally, the document distinguishes between research methods, which are techniques for collecting and analyzing data, and research methodology, which is the overall systematic approach to solving a research problem.
The document outlines key aspects of research methodology including:
1. The objectives of research such as defining problems, formulating hypotheses, collecting and evaluating data, making deductions, and testing conclusions.
2. The different types of research including descriptive, applied, quantitative, conceptual, empirical, qualitative, fundamental, and analytical research.
3. The methods of collecting data including primary methods like questionnaires, observations, interviews, and schedules and secondary methods of collecting published and unpublished data from various sources.
This document provides an overview of key concepts in research methodology, including:
1. It defines research as an organized and systematic process of finding answers to questions through a defined set of steps and procedures.
2. It discusses different types of research including quantitative, qualitative, basic, applied, longitudinal, descriptive, classification, comparative, exploratory, explanatory, causal, theory testing, and theory building research.
3. It also discusses alternatives to research-based knowledge such as relying on authority, tradition, common sense, media, and personal experience.
The document provides an overview of a course on qualitative research methods. It discusses key topics that will be covered in the lectures, including what qualitative research is, different qualitative research strategies and how to implement them, methods for collecting data through observation and interviews, and analyzing qualitative data. The lectures will cover theory, qualitative research strategies and processes, data collection techniques, and critiques of qualitative research approaches.
This document outlines the key elements of quantitative research including hypothesis testing, variables, sampling methods, measurement, validity and reliability, statistical analysis, and causal relationships. Quantitative research aims to systematically test hypotheses through precise standardized measurement and statistical analysis of numerical data. Variables are defined, data is collected from samples using standardized tools and procedures, and results are analyzed using statistical techniques to determine relationships between variables and test hypotheses. The goal is to explain phenomena through objective and replicable quantitative analysis.
This document provides an introduction to probability theory and different probability distributions. It begins with defining probability as a quantitative measure of the likelihood of events occurring. It then covers fundamental probability concepts like mutually exclusive events, additive and multiplicative laws of probability, and independent events. The document also introduces random variables and common probability distributions like the binomial, Poisson, and normal distributions. It provides examples of how each distribution is used and concludes with characteristics of the normal distribution.
UNIVARIATE & BIVARIATE ANALYSIS
UNIVARIATE BIVARIATE & MULTIVARIATE
UNIVARIATE ANALYSIS
-One variable analysed at a time
BIVARIATE ANALYSIS
-Two variable analysed at a time
MULTIVARIATE ANALYSIS
-More than two variables analysed at a time
TYPES OF ANALYSIS
DESCRIPTIVE ANALYSIS
INFERENTIAL ANALYSIS
DESCRIPTIVE ANALYSIS
Transformation of raw data
Facilitate easy understanding and interpretation
Deals with summary measures relating to sample data
Eg-what is the average age of the sample?
INFERENTIAL ANALYSIS
Carried out after descriptive analysis
Inferences drawn on population parameters based on sample results
Generalizes results to the population based on sample results
Eg-is the average age of population different from 35?
DESCRIPTIVE ANALYSIS OF UNIVARIATE DATA
1. Prepare frequency distribution of each variable
Missing Data
Situation where certain questions are left unanswered
Analysis of multiple responses
Measures of central tendency
3 measures of central tendency
1.Mean
2.Median
3.Mode
MEAN
Arithmetic average of a variable
Appropriate for interval and ratio scale data
x
MEDIAN
Calculates the middle value of the data
Computed for ratio, interval or ordinal scale.
Data needs to be arranged in ascending or descending order
MODE
Point of maximum frequency
Should not be computed for ordinal or interval data unless grouped.
Widely used in business
MEASURE OF DISPERSION
Measures of central tendency do not explain distribution of variables
4 measures of dispersion
1.Range
2.Variance and standard deviation
3.Coefficient of variation
4.Relative and absolute frequencies
DESCRIPTIVE ANALYSIS OF BIVARIATE DATA
There are three types of measure used.
1.Cross tabulation
2.Spearmans rank correlation coefficient
3.Pearsons linear correlation coefficient
Cross Tabulation
Responses of two questions are combined
Spearman’s rank order correlation coefficient.
Used in case of ordinal data
The document provides an overview of quantitative research methodology. It discusses key concepts including population, sampling, samples, and qualitative scales. Specifically, it defines population as any complete group with at least one characteristic in common. It explains that sampling is used to select a subset of a population for a study. The document also outlines different types of measurement scales in quantitative research including nominal, ordinal, interval, and ratio scales.
Research involves defining problems, formulating hypotheses, collecting and evaluating data, reaching conclusions, and testing those conclusions. It is a systematic process that requires accurate data collection and adherence to ethical standards. Research aims to generate new knowledge and insights through logical reasoning using both inductive and deductive methods. The purpose of research can be descriptive, explanatory, or exploratory. There are different types of research methodologies including basic vs applied, descriptive vs exploratory, correlational vs explanatory, qualitative vs quantitative, and conceptual vs empirical research.
Difference between qualitative and quantitative research shaniShani Jyothis
nursing research### quantitative research###qualitative research###difference#### process of research ......
Quantitative Vs qualitative research.......÷######$###@@@@@@@@@@ based on hypothesis, ............., variables analysis,............ interpretation, .............
The document provides an overview of hypothesis testing. It begins by defining a hypothesis test and its purpose of ruling out chance as an explanation for research study results. It then outlines the logic and steps of a hypothesis test: 1) stating hypotheses, 2) setting decision criteria, 3) collecting data, 4) making a decision. Key concepts discussed include type I and type II errors, statistical significance, test statistics like the z-score, and assumptions of hypothesis testing. Factors that can influence a hypothesis test like effect size, sample size, and alpha level are also covered.
This document discusses the development of hypotheses in research. It defines a hypothesis as a tentative answer to a research problem that can be tested. A hypothesis identifies an independent variable that affects a dependent variable. Formulating a hypothesis narrows a research question in a way that can be reasonably studied. Hypotheses vary depending on whether a study takes a qualitative or quantitative approach. Qualitative research uses questions rather than hypotheses, while quantitative research uses testable hypotheses relating independent and dependent variables. The document provides examples of writing quantitative hypotheses with different levels of specificity.
This document provides an overview of statistics and biostatistics. It defines statistics as the collection, analysis, and interpretation of quantitative data. Biostatistics refers to applying statistical methods to biological and medical problems. Descriptive statistics are used to summarize and organize data, while inferential statistics allow generalization from samples to populations. Common statistical measures include the mean, median, and mode for central tendency, and range, standard deviation, and variance for variability. Correlation analysis examines relationships between two variables. The document discusses various data types and measurement scales used in statistics. Overall, it serves as a basic introduction to key statistical concepts for research.
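The central-tendency and variability measures named above are all available in Python's standard `statistics` module; a small sketch with made-up data:

```python
import statistics as st

data = [4, 8, 6, 5, 3, 8, 9, 4, 8]   # hypothetical sample

mean = st.mean(data)          # arithmetic average
median = st.median(data)      # middle value of the sorted data
mode = st.mode(data)          # most frequent value
rng = max(data) - min(data)   # range: largest minus smallest
var = st.variance(data)       # sample variance
sd = st.stdev(data)           # sample standard deviation
```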
This document discusses guidelines for selecting a research problem and formulating hypotheses. It defines key terms like research problem, assumption, hypothesis, and title. It provides guidelines for writing titles, selecting research topics, formulating general and specific research problems. It also discusses the different forms hypotheses can take and their purposes and functions in research.
Quantitative Methods of Research-Intro to research
Once a researcher has written the research question, the next step is to determine the appropriate research methodology necessary to study the question. The three main types of research design methods are qualitative, quantitative and mixed methods.
Quantitative research involves the systematic collection and analysis of data.
This document provides an overview of inferential statistics. It defines inferential statistics as using samples to draw conclusions about populations and make predictions. It discusses key concepts like hypothesis testing, null and alternative hypotheses, type I and type II errors, significance levels, power, and effect size. Common inferential tests like t-tests, ANOVA, and meta-analyses are also introduced. The document emphasizes that inferential statistics allow researchers to generalize from samples to populations and test hypotheses about relationships between variables.
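Effect size, mentioned above, is often reported as Cohen's d; a minimal sketch assuming equal-variance groups (the function name and numbers are mine, not from the document):

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d effect size using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical groups with equal spread: a 5-point mean difference, sd = 10
d = cohens_d(m1=105, s1=10, n1=30, m2=100, s2=10, n2=30)
```

Here d = 0.5, conventionally read as a "medium" effect.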
Research designs for quantitative studies (Nursing Path)
The document discusses research designs for quantitative studies. It describes the key components of a research design including the intervention, comparisons, controls for extraneous variables, and timing of data collection. It also outlines different types of research designs such as experimental, quasi-experimental, and non-experimental designs. Experimental designs manipulate an intervention and include a control group, while quasi-experimental designs do not randomly assign subjects. Non-experimental designs do not involve manipulation of an intervention.
This document discusses and provides examples of different research designs, including experimental and quasi-experimental designs. Experimental designs use random assignment and manipulation of an independent variable, with a control group for comparison. Quasi-experimental designs lack random assignment. True experiments use pre-test/post-test designs or post-test only designs. Quasi-experiments include non-equivalent control group designs and time series designs. Pre-experimental designs like one-shot case studies and one group pre-test/post-test designs provide little value due to the lack of control groups. Non-experimental designs do not manipulate variables and can only study correlation, not causation.
Part of a course I run introducing quantitative methods. One of the slideshows on my site www.kevinmorrell.org.uk please reference the site if you use any of it - hope it is useful.
This presentation is about Quantitative Research, its types and important aspects including advantages and disadvantages, characteristics and definitions.
This thesis analyzes English inquiry letters for online shopping from the perspective of systemic functional grammar. It begins with an introduction that outlines the study's rationale, aims, scope, methods, and design. The development section then reviews the theoretical background of systemic functional grammar and examines inquiry letters in terms of experiential, interpersonal, and textual meaning. Specifically, it analyzes transitivity, mood, and thematic structure. The study concludes by discussing implications for writing inquiry letters and suggesting exercises to practice this skill. It also acknowledges limitations and provides suggestions for further related research.
This document summarizes research on online shopping conducted by a group of students. The objectives were to determine the most appropriate concept, price range, promotion media, product usage among targets, and estimated market size. PayPal was analyzed as a payment option allowing secure online payments without credit card exposure. The target markets identified were reluctant shoppers needing security, frequent shoppers wanting ease of use, and online merchants. A survey methodology was used with a non-probability sample to analyze shopping habits and payment concerns. Results were analyzed and recommendations made. The estimated Egyptian online shopping market size was 5 million people with purchasing power out of the country's 25 million internet users.
This document outlines potential topics areas for BSCS thesis in 2015, including mobile computing systems, embedded systems, and intelligent systems. It provides examples of each topic area and considerations for thesis proposals. Mobile computing allows using technology in remote environments through portable, wireless devices like smartphones. Embedded systems control electromechanical equipment as an integral part of a larger system. Intelligent systems can gather and analyze data to adapt and communicate through connectivity in security-focused systems like traffic lights and smart meters.
Marketing online shopping: consumers' perception of online shopping (Radhe Jha)
This document appears to be a dissertation report on consumer perceptions of online shopping in India. It includes an introduction that provides background on the growth of online shopping in India. The introduction discusses how online activities like shopping, banking, employment searches, and travel booking are growing significantly. It also outlines key factors driving online adoption in India like increasing internet penetration, lower costs, and changing consumer attitudes. The dissertation will examine factors that influence Indian consumers' online shopping perceptions and behaviors through a survey analysis.
A study on factors influencing consumers' online shopping (Ashok kumar T)
This study examines factors that influence consumers' online shopping behavior in Coimbatore City, India. The researchers conducted a survey of 300 online shoppers. They found that website design, security, reliability, and customer service were the main factors affecting shopping behavior. Website security and reliable delivery were especially important to consumers. The study also found that these four factors had a significant impact on consumers' purchase decisions. Overall, the analysis showed that addressing these factors is important for online retailers to attract and satisfy customers.
Online shopping is growing in popularity in Vietnam, and the most popular segment is fashion. Q&Me Vietnam market research conducted a survey among Vietnamese online shoppers.
The internet has developed rapidly over the last two decades, and the digital economy driven by information technology has grown worldwide alongside it. Sustained growth in web users, high-speed connections, and new web-development technologies now let firms promote products and services and enhance their image through websites. Detailed product information and improved service have therefore drawn more and more consumers away from traditional shopping toward internet shopping. In turn, more companies have recognized that this transformation in consumer behaviour is an unavoidable trend and have changed their marketing strategies. Recent research indicates that internet shopping, particularly business-to-consumer (B2C), has risen and become popular with many people. According to The Emerging Digital Economy II, a report published by the US Department of Commerce, e-commerce accounts for a substantial share of total sales in some companies. For instance, Dell reached 18 million dollars in internet sales during the first quarter of 1999; as a result, about 30% of its 5.5 billion dollars in total sales were achieved through the internet (Moon, 2004). Understanding internet shopping and its impact on consumer behaviour can therefore help companies use it as a form of e-business.
There are many reasons for the rapid growth of internet shopping, chiefly the benefits the internet provides. First, it offers consumers several kinds of convenience: they do not need to go out to find product information, since they can search online sites and compare them to find the cheapest price. The internet also helps consumers use products more efficiently and effectively than other channels in satisfying their needs. Through search engines, consumers save time in accessing consumption-related information, which combines images, sound, and detailed text descriptions to help them learn about and choose the most suitable product (Moon, 2004). However, internet shopping also carries risks for customers, such as payment safety and after-sales service. As internet technology has developed, online payment has become a prevalent way to purchase goods; it increases efficiency, but its virtual nature reduces security. After-sales service is another barrier to shopping online: unlike traditional retail, some service, especially for complicated goods, must be delivered face to face.
This document discusses quantitative research methods. It defines quantitative research as using numerical data to obtain objective information. The goals of quantitative research are to generalize findings, be objective, and test theories. The quantitative research process involves 10 steps: developing a theory and hypotheses, research design, defining concepts and variables, selecting respondents, data collection, data preparation, analysis, conclusions, and reporting. Several data collection methods are also discussed, including surveys, structured interviews, structured observations, and questionnaires.
This document provides an overview of research methodology. It defines research and discusses its key characteristics including being systematic, empirical, and objective. It also covers different types of research such as pure vs applied research, and quantitative vs qualitative approaches. Additionally, it outlines the typical steps in the research process from formulating the problem to analyzing data and reporting results. The document serves as a useful introduction to research methodology concepts.
This document outlines and describes the major types of research, including descriptive research, analytical research, applied research, basic research, quantitative research, qualitative research, conceptual research, and non-scientific methods. Descriptive research involves surveys and fact-finding to describe the current state of affairs, while analytical research involves in-depth study and evaluation of available information to explain complex phenomena. Applied research aims to solve immediate problems, and basic research focuses on developing general theories. Quantitative research relies on measurement, while qualitative research examines qualities and characteristics.
Qualitative research focuses on words rather than numbers, generates theories rather than generalizing, and aims to understand participant views without claiming to generalize. Qualitative researchers are influenced by interpretivism and seek to understand social life through the eyes of participants by emphasizing context and flexibility over rigid structures. The qualitative research process involves generating questions, selecting sites and subjects, collecting and analyzing data, developing concepts and theories, and writing conclusions. Reliability and validity are ensured through methods like member checking and triangulation. Qualitative sampling uses non-probability methods like convenience sampling. Data collection methods include interviews, focus groups, document analysis, and observation.
This document provides a research report on online retailing of fashion clothing in India. It begins by introducing online retailing and growth trends in India. The research problem is defined as studying the drivers of customers shifting from brick-and-mortar to online buying of fashion clothing. Research methods include a literature review, surveys, and interviews. The hypotheses were that upper classes shop online more and that residents of tier-one cities shop online more. Findings show both hypotheses to be false, as lower income groups and shoppers in tier-two cities dominate online shopping. Suggestions include targeting younger male audiences, improving websites, and offering competitive prices and payment options.
The document provides information about analyzing and interpreting data through various graphs and calculations. It defines terms like mean, median, mode, and range. It explains how to calculate the mean, median, mode, and range of a data set. It also defines and compares different types of graphs like bar graphs, circle graphs, line graphs, line plot graphs, pictographs, and Venn diagrams. Finally, it provides some practice websites for interpreting data.
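A line-plot (dot-plot) style graph like those described can be sketched in plain text from a frequency count; the data here are made up:

```python
from collections import Counter

data = [2, 3, 3, 5, 3, 2, 5, 4]   # hypothetical observations
counts = Counter(data)            # frequency of each distinct value

for value in sorted(counts):
    print(f"{value}: {'x' * counts[value]}")   # one 'x' per occurrence
```

Each row shows a value and one mark per observation, the same information a line plot conveys.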
This PowerPoint was presented at the 2012 Summer School on Fashion Management at the University of Antwerp. The lecture explains the concept of business models from a theoretical point of view, and illustrates this with an example from the fashion industry.
Here are my responses to the guide questions:
1. I decided to teach in SHS because I wanted to help guide students in their transition to college and career. I find it rewarding to support students' personal and academic growth during this important stage of their lives.
2. Two of the most significant experiences I've had teaching Research involve seeing students get excited about their topics and taking ownership of their work. It's amazing to see their eyes light up when they discover something interesting during the research process. I also appreciate witnessing students' confidence grow as they learn to independently plan and conduct research. These experiences are meaningful because they show the positive impact of research skills on student learning and development.
3. One of my most
Researchers use several tools and procedures for analyzing quantitative data obtained from different types of experimental designs. Different designs call for different methods of analysis. This presentation focuses on:
T-test
Analysis of variance (F-test), and
Chi-square test
Selection of appropriate data analysis technique (RajaKrishnan M)
- The document discusses choosing the right statistical method for data analysis, which depends on factors like the number and measurement level of variables, the distribution of variables, the dependence/independence structure, the nature of the hypotheses, and sample size.
- It presents flowcharts for choosing a statistical method based on whether the hypothesis involves one variable (univariate), two variables (bivariate), or more than two variables (multivariate).
- For univariate data, descriptive statistics or a one-sample t-test can be used depending on whether description or inference is the goal; for bivariate data, the choice depends on the nature of the hypothesis (difference or association) and the level of measurement (parametric or nonparametric).
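The flowchart logic could be caricatured as a toy decision function; the branch labels below are simplified assumptions, not the document's full chart:

```python
def choose_test(n_vars, goal=None, measurement=None):
    """Toy sketch of the test-selection flowchart (heavily simplified)."""
    if n_vars == 1:                                    # univariate
        return "descriptive statistics" if goal == "describe" else "one-sample t-test"
    if n_vars == 2:                                    # bivariate
        if measurement == "parametric":
            return "t-test / Pearson correlation"
        return "Mann-Whitney U / Spearman correlation"  # nonparametric fallback
    return "multivariate methods (e.g. multiple regression)"
```

A real selection also weighs sample size, distribution shape, and the dependence structure, as the bullets above note.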
Here are the steps to solve this hypothesis testing problem:
1. State the null and alternative hypotheses:
H0: There is no significant difference between the means under stress and no stress conditions.
H1: There is a significant difference between the means under stress and no stress conditions.
2. Choose the level of significance: Given as α = 0.01
3. Select the appropriate statistical test: Since this involves comparing the means of two independent groups, use a two-sample t-test.
4. Compute the test statistic and p-value: Follow the t-test formula and calculation.
5. Make a decision: Reject H0 if p-value < α; fail to reject H0 if p-value ≥ α.
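Steps 4 and 5 can be sketched with a pooled two-sample t statistic; the stress/no-stress scores below are invented stand-ins for the problem's actual data:

```python
from math import sqrt
from statistics import mean, stdev

def two_sample_t(x, y):
    """Pooled two-sample t statistic (equal variances assumed)."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2) / (n1 + n2 - 2)
    t = (mean(x) - mean(y)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2             # t statistic and degrees of freedom

stress = [12, 14, 11, 13, 15]         # hypothetical scores
no_stress = [16, 18, 17, 19, 15]
t, df = two_sample_t(stress, no_stress)
# Step 5: with alpha = 0.01 and df = 8, the two-tailed critical value from a
# t table is about 3.355; here |t| = 4.0 > 3.355, so H0 would be rejected.
```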
This document discusses different statistical tests used to analyze experimental research data, including the t-test, analysis of variance (ANOVA), and chi-square test. It provides examples of how to apply each test and interpret the results. The t-test is used to compare the means of two groups, ANOVA is used for comparing more than two groups, and chi-square is used to analyze relationships between categorical variables. Computer programs like SPSS can perform these statistical analyses to help researchers evaluate experimental data.
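The chi-square statistic for categorical counts is simple enough to compute by hand; a sketch with invented observed and expected frequencies:

```python
def chi_square_stat(observed, expected):
    """Pearson's chi-square statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical 3-category counts; df = categories - 1 = 2
stat = chi_square_stat(observed=[50, 30, 20], expected=[40, 40, 20])
# Compare stat with the chi-square critical value for df = 2 (5.991 at alpha = 0.05).
```

Here stat = 5.0, short of 5.991, so the null hypothesis of the assumed proportions would not be rejected at the 0.05 level.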
This document provides an outline and overview of Chapter 9 from a statistics textbook. The chapter covers hypothesis testing for single populations, including:
- Establishing null and alternative hypotheses
- Understanding Type I and Type II errors
- Testing hypotheses about single population means when the standard deviation is known or unknown
- Testing hypotheses about single population proportions and variances
- Solving for Type II errors
The chapter teaches students how to implement the HTAB (Hypothesize, Test, take statistical Action, determine Business implications) system to scientifically test hypotheses using statistical techniques like z-tests and t-tests. Key concepts covered include one-tailed and two-tailed tests, critical values, and p-values.
This document provides an overview of quantitative data analysis methods for medical education research. It discusses summary measures, hypothesis testing, statistical methodologies, sample size determination, and additional resources for statistical support. Key points covered include choosing appropriate statistical tests based on study design, translating research questions into testable hypotheses, interpreting p-values and making conclusions, and factors that influence required sample size such as effect size and variability.
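The link between effect size, sample size, and power mentioned above can be sketched for a two-tailed z-test; the formula uses only the normal distribution, and the numbers are mine:

```python
from math import sqrt
from statistics import NormalDist

def z_test_power(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a two-tailed z-test when the true mean is mu1 (sketch)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)        # e.g. 1.96 for alpha = 0.05
    shift = (mu1 - mu0) / (sigma / sqrt(n))   # standardized size of the true effect
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

# Hypothetical study: detecting a 6-point shift with sigma = 15 and n = 36
power = z_test_power(mu0=100, mu1=106, sigma=15, n=36)
```

Increasing `n` or the true effect raises `power`, which is why sample-size planning works backwards from a target power.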
Unit 4: contemporary marketing research full notes, Pune Univers... (Pratik Meshram)
Unit 4 discusses data analysis and hypothesis testing. It covers topics such as data analysis, hypothesis, conjoint analysis, and factor analysis. Data analysis involves collecting, processing, analyzing, and interpreting data. A hypothesis is a proposition that can be tested. The key steps in hypothesis testing are to formulate hypotheses, select a significance level, choose a test criterion, and make a decision to accept or reject the null hypothesis. Common hypothesis tests include z-tests, t-tests, F-tests, chi-square tests, and ANOVA.
The document discusses hypothesis testing and statistical analysis techniques. It covers univariate, bivariate, and multivariate statistical analysis, which involve one, two, or three or more variables, respectively. The key steps of hypothesis testing are outlined, including deriving a null hypothesis from the research objectives, obtaining and measuring a sample, comparing the sample value to the hypothesis, and determining whether to support or not support the hypothesis based on consistency. Type I and Type II errors in hypothesis testing are defined. Common statistical tests like chi-square, t-tests, ANOVA, and correlation are introduced along with concepts like significance levels, p-values, and degrees of freedom.
This document outlines the steps in hypothesis testing using the traditional method. It defines key terms like the null hypothesis (H0), alternative hypothesis (H1), type I and type II errors. The three main types of hypothesis tests are described - two-tailed, right-tailed, and left-tailed. Examples are provided to demonstrate how to state the null and alternative hypotheses for different conjectures. The document also explains the four possible outcomes of a hypothesis test and compares the process to a jury trial.
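The two-, right-, and left-tailed rejection regions can be sketched as critical z-values; `critical_values` is an illustrative helper, not from the document:

```python
from statistics import NormalDist

def critical_values(tail, alpha=0.05):
    """Critical z-value(s) marking the rejection region (illustrative sketch)."""
    inv = NormalDist().inv_cdf
    if tail == "two":
        c = inv(1 - alpha / 2)
        return (-c, c)         # reject H0 when z < -c or z > c
    if tail == "right":
        return inv(1 - alpha)  # reject H0 when z exceeds this value
    return inv(alpha)          # left-tailed: reject H0 when z is below this value
```

At alpha = 0.05 this gives roughly ±1.96 for a two-tailed test and 1.645 for a one-tailed test.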
This document discusses data analysis and hypothesis testing. It covers:
1. The importance of properly analyzing collected data, including preparing analysis tables and frameworks before data collection.
2. The meaning and definition of a hypothesis, including the concept of a null hypothesis.
3. The key steps in hypothesis testing: formulating hypotheses, selecting a significance level, choosing a test criterion, and making a decision.
4. Types of errors in hypothesis testing, including Type I and Type II errors.
5. Examples of common parametric tests like z-tests, t-tests, and F-tests, which assume parameters exist about the population.
6. The concept of non-parametric tests, which do not assume a particular population distribution.
This document provides an overview of research methods and statistical concepts. It discusses research design types including descriptive, historical, and experimental. Experimental design can be true experiments or quasi-experiments. It also discusses quantitative and qualitative research approaches and mixed methods. Key statistical concepts are defined, such as population, sample, probability and non-probability sampling, and levels of measurement. Common statistical tests are introduced along with important assumptions. The document provides guidance on how to measure learning experimentally using different research designs. It also discusses how to determine appropriate sample sizes and select statistical analyses based on the research questions.
This document provides an overview of a presentation on statistical hypothesis testing using the t-test. It discusses what a t-test is, how to perform a t-test, and provides an example of a t-test comparing spelling test scores of two groups that received different teaching strategies. The document outlines the six steps for conducting statistical hypothesis testing using a t-test: 1) stating the hypotheses, 2) choosing the significance level, 3) determining the critical values, 4) calculating the test statistic, 5) comparing the test statistic to the critical values, and 6) writing a conclusion.
The document discusses experimental and quasi-experimental research methods. It defines key characteristics of experimental research such as random assignment, control and intervention groups, and pre- and post-testing. Issues of internal and external validity are examined. Common statistical analyses for experimental designs are introduced, including t-tests, ANOVA, and multiple regression. Examples of experimental designs like single-group, non-equivalent groups, interrupted time series, and factorial designs are also summarized.
This document introduces parametric tests and provides information about the t-test. It defines parametric tests as those applied to normally distributed data measured on interval or ratio scales. Parametric tests make inferences about the parameters of the probability distribution from which the sample data were drawn. Examples of common parametric tests are provided, including the t-test. The t-test is used to compare two means from independent samples or correlated samples. Steps for conducting a t-test are outlined, including calculating the t-statistic and making decisions based on critical t-values. Two examples of using a t-test on experimental data are shown.
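For correlated (paired) samples, the t statistic is computed on the within-pair differences; a minimal sketch with invented before/after scores:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-samples t statistic computed on the pair differences."""
    d = [b - a for b, a in zip(before, after)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

t = paired_t(before=[10, 12, 9, 11], after=[12, 13, 11, 12])  # hypothetical scores
# Decision step: compare |t| with the critical t for df = n - 1 = 3.
```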
This part of the thesis describes the methodology section which provides details of the research activities, data collection strategies, and administration of questionnaires and interviews to achieve the study objectives and address the problem. It discusses preparing and testing questionnaires, identifying persons responsible for data collection, and approaches for administering questionnaires and conducting interviews.
The document discusses various topics related to research methodology including definitions of research, types of research, research methods, sampling techniques, data collection methods, and experimental research. Some key points:
- Research is defined as a systematic effort to gain new knowledge through objective and scientific methods. It involves identifying a problem, formulating a hypothesis, collecting and analyzing data, and reporting findings.
- There are different types of research including descriptive, analytical, applied, fundamental, quantitative, qualitative, and more. Research methods can be quantitative, qualitative, experimental, case study, etc.
- Important steps in research include formulating the problem, literature review, developing hypotheses, research design, sampling, data collection, analysis, testing hypotheses, and reporting findings.
The document discusses hypothesis testing and the scientific research process. It begins by defining a hypothesis as a tentative statement about the relationship between two or more variables that can be tested. It then outlines the typical steps in the scientific research process, which includes forming a question, background research, creating a hypothesis, experiment design, data collection, analysis, conclusions, and communicating results. Finally, it provides details on characteristics of a strong hypothesis, the process of hypothesis testing through statistical analysis, and setting up an experiment for hypothesis testing, including defining hypotheses, significance levels, sample size determination, and calculating standard deviation.
Hypothesis testing and estimation are used to reach conclusions about a population by examining a sample of that population.
Hypothesis testing is widely used in medicine, dentistry, health care, biology, and other fields as a means of drawing conclusions about the nature of populations.
This document provides an overview of non-parametric tests presented by Ms. Prajakta Sawant. It discusses non-parametric tests as distribution-free statistical tests that do not require assumptions about the underlying population distribution. Common non-parametric tests described include the Wilcoxon rank-sum test, Kruskal-Wallis test, Spearman's rank correlation coefficient, and the chi-square test. Examples are provided for each test to illustrate their application and interpretation.
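Spearman's rank correlation, one of the tests listed, reduces to a simple formula when there are no tied ranks; a sketch (the ranking helper is mine):

```python
def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank                      # rank 1 = smallest value
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))     # rho = 1 - 6*sum(d^2) / (n(n^2-1))

rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])   # hypothetical rankings
```

Because it uses only ranks, the result is unchanged by any monotone transformation of the raw data, which is what makes the test distribution-free.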
Similar to Introduction to Quantitative Research Methods
The document discusses research methods and the research process, including defining what research is, the first classic researchers such as Socrates and Aristotle, the typical life cycle of research from developing ideas to analyzing them, a historical case study of the shifting models of the universe from Ptolemy to Newton, and how to develop a good research question.
This document provides an overview of artificial neural networks. It discusses biological neurons and how they are modeled in computational systems. The McCulloch-Pitts neuron model is introduced as a basic model of artificial neurons that uses threshold logic. Network architectures including single and multi-layer feedforward and recurrent networks are described. Different learning processes for neural networks including supervised and unsupervised learning are summarized. The perceptron model is explained as a single-layer classifier. Multilayer perceptrons are introduced to address non-linear problems using backpropagation for supervised learning.
This document provides an overview of artificial intelligence (AI) including its history and key concepts. It discusses how philosophers like Hobbes and mathematicians like Boole laid the foundations for AI by exploring symbolic logic and operations. Landmark developments included Babbage's analytical machine, Turing's universal machine concept, and McCarthy coining the term "artificial intelligence". The document also outlines branches of AI like natural language processing, computer vision, robotics, problem solving, learning, and expert systems. It provides examples of applications and concludes by noting progress made in creating human-like artificial creatures remains limited.
1. The document provides an introduction to expert systems, including their basics, applications, development process, structure, and inferencing methods.
2. Expert systems use both facts and heuristics to solve complex decision problems based on knowledge acquired from experts in specific domains such as medical diagnosis.
3. The key components of an expert system are the knowledge base containing rules and data, the working memory containing task-specific data, and the inference engine which applies rules to data to arrive at solutions. Forward and backward reasoning are common inferencing methods.
Evolutionary algorithms are optimization techniques inspired by biological evolution. They work by generating random solutions and using mechanisms like selection, crossover and mutation to iteratively improve the population's fitness. Genetic algorithms are a popular type of evolutionary algorithm that mimics Darwinian evolution by maintaining a population of candidate solutions and using techniques like crossover and mutation to produce new solutions from existing ones. An example demonstrates how a genetic algorithm can be applied to optimize a function by evolving a population of potential solutions over generations.
This document summarizes a presentation on active noise control fundamentals and recent advances. It discusses the history of active noise control from the 1930s to present day, covering milestones such as the first patent in 1936, emerging analog devices in the 1950s, the introduction of adaptive noise cancellation using digital signal processing in the 1970s, and the development of digital active noise control systems and applications from the 1980s onward. It also outlines topics to be covered on active noise control fundamentals and recent advances in two parts of the presentation.
This document presents a remote FxLMS algorithm for active noise control in remote locations. It introduces the concept of active noise control to cancel noise using secondary sources. A novel model is proposed to analyze active noise control systems in the acoustic domain. Based on this model, a methodology is developed for active noise control at remote locations using a remote FxLMS adaptive algorithm. Results show the remote algorithm can effectively control noise at a distant point. Future work aims to target 3D zones of quiet in remote locations.
Adaptive Active Control of Sound in Smart Rooms (2014) (Iman Ardekani)
This presentation discusses adaptive active noise control in smart rooms. It introduces the concept of using active noise control to reduce noise in smart rooms like living rooms, office rooms, hospital rooms, and classrooms. It then discusses some of the challenges of active noise control stability in real-life applications and smart rooms. The presentation proposes using root locus analysis and introducing a compensator to improve stability. It also explores using remote acoustic sensing to replace the error microphone and allow more effective use of space in the quiet zone.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Nucleophilic Addition of carbonyl compounds.pptxSSR02
Nucleophilic addition is the most important reaction of carbonyls. Not just aldehydes and ketones, but also carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
4. From Research Methodology to Hypothesis
[Diagram: a research methodology comprising several methods (Method 1, Method 2, …), each addressing its own research questions.]
5. An example for Research Methodology
Each step may involve several research methods:
Step 1: Planning and defining RQ
Step 2: Literature Review
Step 3: Survey Development
Step 4: Data Collection
Step 5: Data Analysis
Step 6: Documentation
From Research Methodology to Hypothesis
6. Methodology Scopes (including but not limited to)
1. Descriptive research (aka statistical research): to describe data and characteristics about the variables of a phenomenon.
2. Correlational research: to explore the statistical relationship between variables.
3. Experimental research: to explore cause-and-effect relationships between variables in controlled environments.
4. Ex post facto research: to explore cause-and-effect relationships between variables when the environment is not under control.
5. Survey research: to assess thoughts and opinions.
From Research Methodology to Hypothesis
7. What is a variable?
Something that changes, takes different values, and that we can alter or measure. There are two types:
1. Independent variables (e.g. an aspect of the environment)
2. Dependent variables (e.g. the behaviour of a system)
Example: when studying the effect of distance on the
transmission delay in radio telecommunication, the distance is
an independent variable and the delay is a dependent variable.
From Research Methodology to Hypothesis
8. From Research Methodology to Hypothesis
Difference Between Research Methods and Research Methodology

Research Methodology | Research Methods
explains the methods by which you may proceed with your research. | the methods by which you conduct research into a subject or a topic.
involves learning the various techniques that can be used in conducting research, tests, experiments, surveys, etc. | involve the conduct of experiments, tests, surveys, etc.
aims at the employment of the correct procedures to find solutions. | aim at finding solutions to research problems.
paves the way for research methods to be conducted properly. |
9. Classifications of Research Methods
1. Qualitative Research Methods
2. Quantitative Research Methods
From Research Methodology to Hypothesis
10. Quantitative Research Methods
Examples are survey methods, laboratory
experiments, formal methods (e.g. econometrics),
numerical methods and mathematical modeling.
Qualitative methods produce information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of these hypotheses are true.
From Research Methodology to Hypothesis
11. A number of descriptive/relational studies show that people have difficulty navigating websites when the navigational bars are inconsistent in their locations throughout a website.
Inductive Reasoning?
Deductive Reasoning?
Variables?
Hypothesis?
From Research Methodology to Hypothesis
12. Inductive Reasoning?
People need consistency in navigational mechanisms.
Deductive Reasoning?
People will have more difficulties with websites if the navigation is
inconsistent.
Independent variables?
Navigational Consistency: defined as characteristics of navigational bars
and their elements such as location, font, colour, etc.
Dependent variables?
Difficulty: defined as the efficiency of navigation by user. For example, time
taken to complete tasks, errors made, usage ratings.
From Research Methodology to Hypothesis
13. Hypothesis?
People will take longer to complete tasks, make more
errors, and give lower ratings of acceptability on a website
with a navigation bar that varies in its location from screen
to screen in comparison to one in which the navigation
bar appears in a consistent position on all screens.
How to test this hypothesis?
By using experiments and based on hypothesis testing
approaches!
From Research Methodology to Hypothesis
15. What is a Hypothesis?
A statement that specifically explains the relationship between the variables of a system or process.
It is a proposed explanation.
It should be tested. How?
From Hypothesis to Experiment
16. Statistical Hypotheses – Definition
A statement either about the parameters of a probability distribution or the variables of a system.
This may be stated formally as
H0: A = B (null hypothesis)
H1: A ≠ B (alternative hypothesis)
where A and B are statistics of two experiments.
From Hypothesis to Experiment
17. Statistical Hypotheses – Notes
Note 1: The alternative hypothesis specified here is called a two-sided alternative hypothesis because it would be true if A > B or if A < B.
Note 2: A and B are two statistics (random variables), so to examine A = B or A ≠ B, their statistical distributions should be considered.
From Hypothesis to Experiment
18. Statistical Hypothesis Testing
Testing a hypothesis involves
1. taking a random sample,
2. computing an appropriate test statistic, and then
3. rejecting or failing to reject the null hypothesis H0.
Part of this procedure is specifying the set of values for
the test statistic that leads to rejection of H0. This set of
values is called the critical region or rejection region for
the test.
From Hypothesis to Experiment
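The three steps above can be sketched in Python. The sample values, the known σ = 2, and the hypothesized mean μ0 = 50 below are hypothetical, chosen only for illustration; the one-sample statistic used here is introduced on a later slide.

```python
from statistics import NormalDist, mean

# Hypothetical setup: H0: mu = 50 vs H1: mu != 50, known sigma = 2
sigma, mu0, alpha = 2.0, 50.0, 0.05

# Step 1: take a random sample (hard-coded here for reproducibility)
sample = [51.2, 49.8, 50.9, 52.1, 50.4, 51.5, 49.9, 51.0]

# Step 2: compute an appropriate test statistic
n = len(sample)
z0 = (mean(sample) - mu0) / (sigma / n ** 0.5)

# Step 3: reject H0 if z0 falls in the critical (rejection) region
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided region |z0| > z_crit
print(abs(z0) > z_crit)  # False: the sample mean 50.85 is not extreme enough
```

With this sample the statistic stays inside the acceptance region, so H0 is not rejected.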
19. Errors in Hypothesis Testing
Two kinds of errors may be committed when testing hypotheses:
Type 1: the null hypothesis is rejected but it is true.
α = P(type 1 error) = P(reject H0 | H0 is true)
Type 2: the null hypothesis is not rejected but it is false.
β = P(type 2 error) = P(fail to reject H0 | H0 is false)
The power of the test is defined as
Power = 1 − β = P(reject H0 | H0 is false)
From Hypothesis to Experiment
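These two error probabilities can be estimated by simulation. In the sketch below all values are hypothetical (H0: μ = 0 with known σ = 1 and n = 25, and a true mean of 0.5 for the power estimate): repeatedly running a z-test under a true H0 gives a rejection rate near α, and under a false H0 a rejection rate near the power 1 − β.

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)

alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value, ~1.96
n = 25

def z_test_rejects(true_mean):
    """Run one z-test of H0: mu = 0 on a sample from N(true_mean, 1)."""
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    z0 = statistics.mean(sample) / (1 / n ** 0.5)
    return abs(z0) > z_crit

trials = 2000
type1_rate = sum(z_test_rejects(0.0) for _ in range(trials)) / trials  # ~ alpha
power = sum(z_test_rejects(0.5) for _ in range(trials)) / trials       # ~ 1 - beta
print(type1_rate, power)
```

The estimated type 1 rate should sit close to 0.05, while the power depends on how far the true mean is from the hypothesized one.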
20. Significance Level
α is called the significance level.
The objective of a statistical test is to achieve a low significance level while still maintaining high test power.
From Hypothesis to Experiment
21. Statistically Significant Hypotheses
A hypothesis verified using the statistical hypothesis testing method is called statistically significant, since it is unlikely to be wrong in a probability sense.
From Hypothesis to Experiment
22. Experiment – Definition
An experiment is a test or a series of tests.
The hypothesis can describe the relationship between x, z and y
variables and an experiment can verify this hypothesis.
From Hypothesis to Experiment
23. How to plan, conduct and analyze an experiment?
Step 1 - Recognition of and statement of the problem
Step 2 - Selection of the response variable
Step 3 - Choice of factors, levels, and range
Step 4 - Choice of experimental design
Step 5 - Performing the experiment
Step 6 - Statistical analysis of the data
Step 7 - Conclusions and recommendations
From Hypothesis to Experiment
24. Let's continue with the following example:
I really like to play golf. Unfortunately, I do not enjoy practicing, so I am
always looking for a simpler solution to lowering my score. Some of the
factors that I think may be important, or that may influence my golf score,
are as follows:
1. The type of driver used (oversized or regular sized)
2. The type of ball used (balata or three piece)
3. Walking and carrying the golf clubs or riding in a golf cart
4. Drinking water or drinking beer while playing
From Hypothesis to Experiment
25. Best-guess Experiments
Change one or several factors for the next round, based on the
outcome of the current test, in order to improve the output.
Example:
Round 1: oversized driver, balata ball, walk, and water:
Score 87: Noticed several wayward shots with the big driver
Round 2: regular-sized driver, balata ball, walk, and water:
Score 80: Noticed that walking quickly becomes tiring
Round 3: regular-sized driver, balata ball, golf cart, and water:
Score 78: Noticed that …
From Hypothesis to Experiment
26. One-factor-at-a-time Experiments
Select a starting point (a default setting for each factor), then successively vary each factor over its range with the other factors held constant at the baseline level.
Example:
Starting point: oversized driver, balata ball, walking, and drinking water.
From Hypothesis to Experiment
27. Example for one-factor-at-a-time approach:
Conclusion:
regular-sized driver, balata ball, riding, and drinking water
is the optimal combination.
From Hypothesis to Experiment
28. Problem with one-factor-at-a-time approach
Interactions between factors are very common. If they occur, the one-factor-at-a-time approach will usually produce poor results.
To solve this problem, factorial experiment design can be used.
From Hypothesis to Experiment
30. Mean (μ): a measure of central tendency.
μ = E{y}
Variance (σ2): a measure of how far a set of
numbers is spread out.
σ2 = V(y) = E{(y-μ)2}
Basic Statistical Concepts
31. If c is a constant and y is a random variable with the
mean of μ and variance of σ2, then
1. E(c) = c
2. E(y) = μ
3. E(cy) = c E(y) = cμ
4. V(c) = 0
5. V(y) = σ2
6. V(cy) = c2 V(y) = c2σ2
Basic Statistical Concepts
32. If y1 is a random variable with mean μ1 and variance σ1², and y2 is another random variable with mean μ2 and variance σ2², then
1. E(y1+y2) = E(y1) + E(y2) = μ1 + μ2
2. E(y1−y2) = E(y1) − E(y2) = μ1 − μ2
3. V(y1+y2) = V(y1) + V(y2) = σ1² + σ2² (for independent y1 and y2)
4. V(y1−y2) = V(y1) + V(y2) = σ1² + σ2² (for independent y1 and y2)
5. E(y1y2) = E(y1) E(y2) = μ1 μ2 (for independent y1 and y2)
Basic Statistical Concepts
33. Statistic: Statistical inference makes considerable
use of quantities computed from the observations
in the sample. We define a statistic as any function
of the observations in a sample that does not
contain unknown parameters:
1. Sample mean
2. Sample Variance
3. and even the random variable (quantity) itself!
Basic Statistical Concepts
34. Sample Mean (shown by ȳ):
ȳ = (y1 + y2 + … + yn)/n
Sample Variance (shown by S²):
S² = Σ(yi − ȳ)²/(n − 1)
Basic Statistical Concepts
35. Exercise: find the sample mean and sample variance for each data set.
[Data sets shown in the original slide's figure.]
ȳ1 = ?  ȳ2 = ?  S1² = ?  S2² = ?
Basic Statistical Concepts
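Since the exercise's data sets are only shown in the original slide figure, here is a minimal sketch with a made-up data set, checking the hand-rolled formulas against Python's `statistics` module:

```python
import statistics

# Hypothetical data set, used only for illustration
y = [12.0, 14.5, 13.2, 15.1, 12.7]

n = len(y)
y_bar = sum(y) / n                          # sample mean
ss = sum((yi - y_bar) ** 2 for yi in y)     # sum of squares about the mean
s2 = ss / (n - 1)                           # sample variance

# The stdlib functions agree with the formulas (up to float rounding)
print(y_bar, s2)
assert abs(y_bar - statistics.mean(y)) < 1e-12
assert abs(s2 - statistics.variance(y)) < 1e-12
```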
36. Sampling Distribution
The probability distribution of a statistic is called a
sampling distribution. Important examples are:
1. Normal distribution
2. Chi-Square Distribution (χ² distribution)
3. t Distribution
Basic Statistical Concepts
37. Normal Distribution
y ~ N(μ, σ²)
In the general case, μ is the mean of the distribution and σ is the standard deviation.
An important special case is the standard normal distribution, where μ = 0 and σ = 1.
z = (y − μ)/σ always has a standard normal distribution.
Basic Statistical Concepts
38. The Central Limit Theorem
If y1, y2, …, yn is a sequence of n independent and identically distributed random variables with E(yi) = μ and V(yi) = σ², and x = y1 + y2 + … + yn, then the distribution of
zn = (x − nμ) / √(nσ²)
approaches the standard normal distribution as n grows.
Basic Statistical Concepts
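A quick simulation illustrates the theorem. The Uniform(0, 1) summands (μ = 0.5, σ² = 1/12) and the sample sizes below are arbitrary choices for the sketch:

```python
import random
import statistics

random.seed(42)

n = 30            # number of terms in each sum
mu = 0.5          # mean of Uniform(0, 1)
sigma2 = 1 / 12   # variance of Uniform(0, 1)

# Draw many standardized sums z_n = (x - n*mu) / sqrt(n * sigma^2)
zs = []
for _ in range(5000):
    x = sum(random.random() for _ in range(n))
    zs.append((x - n * mu) / (n * sigma2) ** 0.5)

# The standardized sums should be approximately N(0, 1)
print(statistics.mean(zs), statistics.stdev(zs))
```

The printed mean and standard deviation should land close to 0 and 1, even though the individual summands are far from normal.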
39. Chi-Square Distribution
x ~ χ²k
If x can be obtained as the sum of the squares of k independent standard normally distributed random variables, then x follows the chi-square distribution with k degrees of freedom.
Basic Statistical Concepts
40. As an example of a random variable that follows the chi-square distribution, suppose that y1, y2, …, yn is a random sample from an N(μ, σ²) distribution, and let SS = Σ(yi − ȳ)² (the sum of squares). Then
SS/σ² ~ χ²n−1
That is, SS/σ² is distributed as chi-square with n − 1 degrees of freedom.
Basic Statistical Concepts
41. Since S² = SS/(n − 1), the distribution of S² is
S² ~ [σ²/(n − 1)] χ²n−1
Thus, the sampling distribution of the sample variance is a constant times the chi-square distribution if the population is normally distributed.
Basic Statistical Concepts
42. t Distribution
If z ~ N(0, 1) and χ²k is a chi-square random variable, then the random variable
tk = z / √(χ²k / k)
follows the t distribution with k degrees of freedom.
Basic Statistical Concepts
43. If y1, y2, …, yn is a random sample from the N(μ, σ²) distribution, then the quantity
t = (ȳ − μ) / (S/√n)
is distributed as t with n − 1 degrees of freedom.
Basic Statistical Concepts
45. Factorial Experiments
Factors are varied together, instead of one at a time.
A special kind of statistical experiment design.
2² Factorial Design (2 factors, each at 2 levels). For example:
Factorial Experiments
46. Example for a 2² factorial design
- 8 rounds in total: each driver–ball combination replicated twice
- Driver Effect?
Driver Effect = (92 + 94 + 93 + 91)/4 − (88 + 91 + 88 + 90)/4 = 92.5 − 89.25 = 3.25
That is, on average, switching from the oversized driver to the regular-sized driver increases the score by 3.25 strokes per round.
Factorial Experiments
47. - Ball Effect?
Ball Effect = (88 + 91 + 92 + 94)/4 − (88 + 90 + 93 + 91)/4 = 91.25 − 90.5 = 0.75
That is, on average, switching from the balata ball to the three-piece ball increases the score by 0.75 strokes per round.
Factorial Experiments
48. - Driver–Ball Interaction Effect?
Driver–Ball Interaction Effect = (92 + 94 + 88 + 90)/4 − (88 + 91 + 93 + 91)/4 = 91 − 90.75 = 0.25
That is, on average, switching both ball and driver together changes the score by a further 0.25 strokes per round.
Finally, one can conclude that
Driver effect > Ball effect > Interaction effect
Factorial Experiments
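The effect calculations above can be reproduced in a few lines of Python. The assignment of the eight scores to driver–ball cells below is reconstructed from the slide's group averages, so treat the exact cell layout as an assumption:

```python
# Scores per driver-ball cell, two replicates each; the cell assignment
# is reconstructed to match the slide's averages (an assumption).
scores = {
    ("oversized", "balata"):      [88, 90],
    ("oversized", "three-piece"): [88, 91],
    ("regular",   "balata"):      [93, 91],
    ("regular",   "three-piece"): [92, 94],
}

def level_mean(factor_index, level):
    """Average of all observations at one level of one factor."""
    vals = [s for cell, reps in scores.items()
            if cell[factor_index] == level for s in reps]
    return sum(vals) / len(vals)

driver_effect = level_mean(0, "regular") - level_mean(0, "oversized")
ball_effect = level_mean(1, "three-piece") - level_mean(1, "balata")

# Interaction: mean of one diagonal of the 2x2 table minus the other
diag = scores[("oversized", "balata")] + scores[("regular", "three-piece")]
off_diag = scores[("oversized", "three-piece")] + scores[("regular", "balata")]
interaction = sum(diag) / 4 - sum(off_diag) / 4

print(driver_effect, ball_effect, interaction)  # 3.25 0.75 0.25
```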
49. 2³ Factorial Design (3 factors, each at 2 levels):
How to calculate the ball effect, driver effect, beverage effect, and interaction effects?
Factorial Experiments
50. Comparative experiments compare two experimental conditions. For example, comparative experiments can be used to determine whether two different formulations of a product give equivalent results.
Apple tree 1 (AKL) vs apple tree 2 (ChCh), same cultivation conditions.
Apple weights (kg), tree 1: 0.101, 0.111, 0.103, 0.102, 0.121, 0.102, 0.101
Apple weights (kg), tree 2: 0.102, 0.101, 0.105, 0.106, 0.111, 0.098, 0.110
Hypothesis: same apple weights?
Comparative Experiments
51. Data Model for Comparative Experiments
The following model for each data set is considered:
yi = μ + εi
μ is the mean of the data set
εi is the noise term, assumed to be distributed as NID(0, σ²)
Comparative Experiments
52. Comparative Experiments Formulation:
In the general case, we have two data sets:
y11, y12, …, y1n1 and y21, y22, …, y2n2
The statistical hypothesis is formulated as
H0: μ1 = μ2 (null hypothesis)
H1: μ1 ≠ μ2 (alternative hypothesis)
Comparative Experiments
53. Two Sample t-test
1. Assume that the variances of the two data sets are equal: σ1² = σ2²
2. Form the following statistic:
t0 = (ȳ1 − ȳ2) / (Sp √(1/n1 + 1/n2))
where ȳ1 and ȳ2 are the sample means of data sets 1 and 2, and Sp² = [(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2) is the pooled estimate of the common variance.
Comparative Experiments
54. Two Sample t-test
3. To determine whether to reject H0, we compare t0 to the t distribution with n1 + n2 − 2 degrees of freedom.
4. We would reject H0 if |t0| > t(α/2, n1+n2−2)
Comparative Experiments
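As a sketch, the pooled two-sample t statistic can be computed for the apple-weight example from the earlier slide. Two assumptions here: the 0.98 entry is read as 0.098 kg (a presumed dropped digit, matching the scale of the other weights), and the critical value t(0.025, 12) ≈ 2.179 is taken from a t table.

```python
import statistics

# Apple weights (kg) from the earlier comparative-experiment slide;
# the 0.98 entry is read as 0.098 kg (assumed dropped digit).
tree1 = [0.101, 0.111, 0.103, 0.102, 0.121, 0.102, 0.101]
tree2 = [0.102, 0.101, 0.105, 0.106, 0.111, 0.098, 0.110]

n1, n2 = len(tree1), len(tree2)
y1_bar, y2_bar = statistics.mean(tree1), statistics.mean(tree2)
s1_sq, s2_sq = statistics.variance(tree1), statistics.variance(tree2)

# Pooled estimate of the common variance, Sp^2
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Two-sample t statistic
t0 = (y1_bar - y2_bar) / (sp_sq ** 0.5 * (1 / n1 + 1 / n2) ** 0.5)

# Compare |t0| to t(0.025, 12) ~ 2.179 (from a t table, alpha = 0.05)
print(round(t0, 3), abs(t0) > 2.179)
```

Since |t0| falls far below the critical value, H0 (equal mean apple weights) is not rejected for these data.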
55. Are the bond strengths of the two cement mortars similar at the significance level of α = 0.05?
Comparative Experiments
59. Exercise
Two machines are used for filling plastic bottles with a net volume of 16.0 ounces.
The filling process can be assumed to be normal. The quality engineering
department suspects that both machines fill to the same net volume. An experiment
is performed by taking a random sample from the output of each machine. Would
you reject or accept the quality engineering department hypothesis?
Comparative Experiments
60. P Value
The smallest level of significance that would lead to rejection of the null hypothesis.
Comparative Experiments
61. P Value Calculation for the Previous Example
t0 = −2.20
ν = 18 (degrees of freedom)
p = 0.0411
Comparative Experiments
62. Confidence Interval
It is often preferable to provide an interval within which the statistic in question would be expected to lie. These interval statements are called confidence intervals.
The interval estimates the difference between the statistics and the accuracy of this estimate.
Comparative Experiments
64. Confidence Interval
L ≤ μ1 − μ2 ≤ U
μ1 − μ2 = 0.5(L + U) ± 0.5(U − L)
That is, the mean difference is 0.5(L + U) and the accuracy of this estimate is 0.5(U − L).
If 0 is not in the interval, H0 would be rejected.
Comparative Experiments
65. For the previous example:
The 95% confidence interval is
L = −0.55, U = −0.01
−0.55 ≤ μ1 − μ2 ≤ −0.01 (μ1 − μ2 = 0 is not in the interval)
μ1 − μ2 = −0.28 ± 0.27
That is, the difference between the two mortar strengths is −0.28 with an accuracy of 0.27.
Comparative Experiments
66. For the previous example, estimate the difference between the two mortar strengths and the accuracy of your estimate by calculating the confidence interval of the t-test.
Answer: the difference between the two mortar strengths is −0.28 with an accuracy of 0.27.
Comparative Experiments
67. Some experiments involve comparing only one population mean to a specified value, say,
H0: μ = μ0
H1: μ ≠ μ0
This problem is a simplified version of the two-sample t-test problem, called the one-sample Z-test.
Comparative Experiments
68. One-Sample Z-test
1. Assume that the variance of the data set is σ²
2. Form the following statistic:
Z0 = (ȳ − μ0) / (σ/√n)
3. If H0 is true, then the distribution of Z0 is N(0, 1). Therefore, we would reject H0 if |Z0| > Z(α/2)
4. Z(α/2) can be obtained from a table.
Comparative Experiments
70. In the population, the average IQ is 100 with a
standard deviation of 15. A team of scientists wants to
test a new medication to see if it has either a positive
or negative effect on intelligence, or no effect at all. A
sample of 30 participants who have taken the
medication has a mean of 140. Did the medication
affect intelligence, using alpha = 0.05?
Comparative Experiments
71. Comparative Experiments
If Z0 is less than −1.96 or greater than 1.96, reject the null hypothesis.
Result: Reject the null hypothesis.
Conclusion: The medication significantly affected intelligence, z = 14.61, p < 0.05.
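This conclusion can be reproduced numerically with the one-sample Z-test from slide 68; `NormalDist` from the standard library supplies the normal quantile and CDF:

```python
from statistics import NormalDist

# IQ example: H0: mu = 100, known sigma = 15, n = 30, sample mean 140
mu0, sigma, n, y_bar, alpha = 100, 15, 30, 140, 0.05

z0 = (y_bar - mu0) / (sigma / n ** 0.5)        # one-sample z statistic
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value, ~1.96
p_value = 2 * (1 - NormalDist().cdf(abs(z0)))  # two-sided p-value

print(round(z0, 2), abs(z0) > z_crit)  # 14.61 True
```

The statistic lands far outside the rejection boundary, so H0 is rejected at α = 0.05.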
73. Find the confidence interval for the previous example.
L = 140 − 1.96 × 15/√30 = 134.63
U = 140 + 1.96 × 15/√30 = 145.37
i.e. 140 ± 5.37
Comparative Experiments
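The same interval can be checked in Python, using the rounded critical value 1.96 as on the slide:

```python
# 95% confidence interval for the IQ example: y_bar +/- z * sigma / sqrt(n)
n, sigma, y_bar, z = 30, 15, 140, 1.96

half_width = z * sigma / n ** 0.5   # 1.96 * 15 / sqrt(30)
L, U = y_bar - half_width, y_bar + half_width
print(round(L, 2), round(U, 2))  # 134.63 145.37
```

Since μ0 = 100 lies well outside [L, U], the interval leads to the same rejection of H0 as the z-test.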
74. Violation of Assumptions in the t-test
The two main assumptions are:
1. Normal distribution: in practice, the assumption of normality can be violated to some extent without affecting the effectiveness of the t-test.
2. Equal variance: if this assumption is violated, other test techniques should be used.
Comparative Experiments
Editor's Notes
Playing in the morning or playing in the afternoon
Playing when it is cool or playing when it is hot
The type of golf shoe spike worn (metal or soft)
Playing on a windy day or playing on a calm day.