This document provides an overview of survey and correlational research methods. It defines survey research as collecting data using instruments like questionnaires to answer questions about people's opinions or characteristics. The main purposes of surveys are to gather information about groups and sample populations. Correlational research determines if and how strongly two or more variables are related by calculating correlation coefficients. Relationship studies explore factors related to complex variables, while prediction studies use correlations to predict outcomes. The document outlines different survey and correlational research designs, procedures, analyses, and considerations.
3. Definition
SURVEY RESEARCH involves collecting data to test hypotheses or to answer questions about people's opinions on some topic or issue.
A SURVEY is an instrument used to collect data that describe one or more characteristics of a specific population.
4. Purpose
To gather information about a group's beliefs, attitudes, behaviour, and demographic composition.
SAMPLE SURVEY: the researcher attempts to infer information about a population based on a representative sample drawn from that population.
CENSUS SURVEY: the researcher attempts to acquire information from every member of a population.
6. Cross-Sectional Surveys
Data are collected from selected individuals at a single point in time.
A single, stand-alone study.
Effective for providing a snapshot of current behaviours, attitudes, and beliefs in a population.
Provides data quickly.
7. Longitudinal Surveys
Data are collected at two or more points in time.
Require an extended commitment by both the researcher and the participants.
Types of longitudinal surveys: trend survey, cohort survey, panel survey, and follow-up survey.
8. Trend Survey
Examines changes over time in a particular population defined by some particular trait or traits.
The researcher can analyze changes in attitudes, beliefs, and behaviours within that population over time.
9. Cohort Survey
Involves one population selected at a particular time period, with multiple samples taken and surveyed at different points in time.
The samples can differ, but they are drawn from the same population.
10. Panel Survey
The same individuals are studied over time.
A frequent problem is the loss of individuals from the study because of relocation, name change, lack of interest, or death.
Takes a long time.
11. Follow-Up Survey
Addresses development or change in a previously studied population, some time after the original survey was given.
Used to examine changes in attitudes, behaviours, or beliefs.
12. Conducting Survey Research
Aim: collection of standardized, quantifiable information from all members of a population or a sample.
The researcher must ask each respondent the same questions.
A questionnaire is a written collection of survey questions to be answered by a selected group of research participants.
An interview is an oral, in-person question-and-answer session.
13. Conducting a Questionnaire Study
- Stating the problem
The problem or topic studied, and the contents of the questionnaire, must be of sufficient significance to motivate potential respondents to respond and to justify the research effort.
The researcher should set specific objectives indicating the kind of information needed.
14. Continued…
- Constructing the Questionnaire
The questionnaire should be attractive, brief, and easy to respond to.
Identify the sub-areas of the research topic to make the process of developing the questionnaire easier.
Types of items: scaled, ranked, checklist, free response (p. 187).
Include only items that relate to the research objectives.
Collect demographic information about the sample if you want to compare different subgroups.
Focus each question on a single concept.
15. Continued…
- Things to consider
Define or explain ambiguous terms.
Include a point of reference to guide respondents in answering questions.
Avoid leading questions.
Avoid sensitive questions.
Don't ask a question that assumes a fact that is not necessarily true.
- Pilot Test the Questionnaire
A pilot test provides information about deficiencies and suggestions for improvement.
16. Continued…
Choose two or three individuals who are thoughtful, critical, and similar to the intended research participants.
- Preparing a Cover Letter
When necessary.
17. Conducting the Questionnaire
Select participants: simple or stratified random, cluster, systematic, or nonrandom sampling.
Distributing: by mail, email, telephone, personal administration, or interview (p. 191).
Conduct follow-ups as reminders.
Rule of thumb: the response rate should be more than 50%.
Dealing with nonresponse: recruit new participants, or make assumptions by generalization.
Analyzing results: select the total sample size.
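The sampling approaches above can be sketched in code. A minimal illustration in Python, assuming nothing beyond a list of participant IDs (the population and the two strata below are hypothetical):

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 participant IDs
random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 10)

# Systematic sampling: pick every k-th member after a random start.
k = len(population) // 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified random sampling: sample equally from each stratum
# (here, two hypothetical strata split at ID 50).
strata = {"low": [p for p in population if p <= 50],
          "high": [p for p in population if p > 50]}
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, 5)]

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
```

Each method yields a sample of the same size here; the choice among them depends on whether a full list of the population exists and whether subgroups must be represented proportionally.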
19. Definition
Correlational research involves collecting data to determine whether, and to what degree, a relationship exists between two or more quantifiable variables.
The degree of relation is expressed as a correlation coefficient: if two variables are related, scores within a certain range on one variable are associated with scores within a certain range on the other variable.
20. Purpose
To determine relations among variables (i.e. a relationship study) or to use these relations to make predictions (i.e. a prediction study).
To determine various types of validity and reliability.
21. Problem Selection
Variables to be correlated should be selected on the basis of some rationale; the relationship to be investigated should be a logical one.
"Treasure hunts", in which the researcher correlates all sorts of variables to see what turns up, are strongly discouraged: they cause inefficiency and yield findings that are difficult to interpret.
23. Design and Procedure
Scores for two (or more) variables of interest are obtained for each member of the sample, and the paired scores are then correlated.
The result is expressed as a correlation coefficient that indicates the degree of relation between the two variables.
24. Data Analysis and
Interpretation
When two variables are correlated, the result is a correlation coefficient, which is a decimal number ranging from -1.00 to +1.00.
A positive correlation means that a person with a high score on one of the variables is likely to have a high score on the other variable, and a person with a low score on one variable is likely to have a low score on the other.
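The coefficient can be computed directly from the paired scores. A minimal sketch using Pearson's formula; the paired scores below are made up for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient for paired scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical paired scores: hours studied vs. test score.
hours = [1, 2, 3, 4, 5]
score = [52, 55, 61, 64, 70]
r = pearson(hours, score)
print(round(r, 3))  # → 0.993, a strong positive correlation
```

A coefficient near +1.00 or -1.00 indicates a strong relationship; a coefficient near 0 indicates little or no linear relationship.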
25. Relationship Studies
A relationship study is one in which a researcher attempts to gain insight into the variables or factors that are related to a complex variable (e.g. academic achievement, motivation, or self-concept).
26. Purpose
1. Relationship studies help to identify related variables suitable for subsequent examination in causal-comparative and experimental studies.
2. Relationship studies provide information about the variables to control for in causal-comparative and experimental studies.
27. PREDICTION STUDIES
If two variables are highly related, scores on one can be used to predict scores on the other.
The variable used to predict is called the predictor, e.g. high school grades or a certification exam.
The variable that is predicted is a complex variable called the criterion, e.g. college grades or principals' evaluations.
A prediction study is an attempt to determine which of a number of variables are most highly related to the criterion variable.
Prediction studies are conducted to facilitate decision making about individuals.
28. Continued…
Prediction studies are also conducted to aid in various types of selection, and to determine the predictive validity of measuring instruments.
The results of prediction studies are used not only by researchers but also by counselors, admissions directors, and employers.
More than one variable can be used to make predictions.
A combination of variables will be more accurate than a single variable.
29. Data collection
In all correlational studies, research participants must be able to provide the desired data and must be available to the researcher.
Valid measuring instruments should be selected to represent the variables.
It is especially important that the measure used for the criterion variable be valid.
30. PREDICTION STUDY vs. RELATIONSHIP STUDY
PREDICTION STUDY: Predictor variables are generally obtained earlier than the criterion variable. An interesting characteristic is shrinkage, the tendency for the prediction to be less accurate for a group other than the one on which it was originally developed.
RELATIONSHIP STUDY: All variables are collected within a relatively short period of time.
31. Data Analysis and Interpretation
Data analysis in prediction studies involves correlating each predictor variable with the criterion variable.
For single-variable predictions, the form of the prediction equation is:
Y = a + bX
where
Y = the predicted criterion score for an individual
X = an individual's score on the predictor variable
a = a constant calculated from the scores of all participants
b = a coefficient that indicates the contribution of the predictor variable to the criterion variable
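The constants a and b can be estimated from the sample by ordinary least squares. A minimal sketch; the predictor and criterion scores below are hypothetical:

```python
def fit_line(xs, ys):
    """Least-squares estimates of a (intercept) and b (slope) in Y = a + bX."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: high school GPA (predictor) vs. college GPA (criterion).
hs_gpa = [2.0, 2.5, 3.0, 3.5, 4.0]
college_gpa = [1.9, 2.2, 2.8, 3.1, 3.6]
a, b = fit_line(hs_gpa, college_gpa)

# Predicted college GPA for a new student with a high school GPA of 3.2.
predicted = a + b * 3.2
print(round(a, 2), round(b, 2), round(predicted, 2))  # → 0.14 0.86 2.89
```

Once a and b are calculated from one sample, the same equation is applied to new individuals; shrinkage means the predictions for those new individuals will usually be somewhat less accurate.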
32. Continued…
Because a combination of variables usually results in a more accurate prediction than any one variable, a prediction study often results in a multiple regression equation.
A multiple regression equation, also called a multiple prediction equation, is a prediction equation including two or more variables that individually predict a criterion, resulting in a more accurate prediction.
An intervening variable, a variable that cannot be directly observed or controlled, can influence the link between predictor and criterion variables.
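A two-predictor equation of the form Y = a + b1·X1 + b2·X2 can be fitted by solving the normal equations. A minimal sketch in plain Python; the data are constructed for illustration so that the true coefficients are known:

```python
def solve3(A, v):
    """Solve a 3x3 linear system A·x = v by Gauss-Jordan elimination."""
    m = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(4)]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_two_predictors(x1, x2, y):
    """Least-squares a, b1, b2 for Y = a + b1*X1 + b2*X2 (normal equations)."""
    n = len(y)
    A = [[n, sum(x1), sum(x2)],
         [sum(x1), sum(u * u for u in x1), sum(u * v for u, v in zip(x1, x2))],
         [sum(x2), sum(u * v for u, v in zip(x1, x2)), sum(v * v for v in x2)]]
    rhs = [sum(y),
           sum(u * t for u, t in zip(x1, y)),
           sum(v * t for v, t in zip(x2, y))]
    return solve3(A, rhs)

# Hypothetical data constructed so that y = 1 + 2*x1 + 3*x2 exactly.
x1 = [0, 1, 2, 3, 4, 5]
x2 = [1, 0, 2, 1, 3, 2]
y = [1 + 2 * u + 3 * v for u, v in zip(x1, x2)]
a, b1, b2 = fit_two_predictors(x1, x2, y)
print(round(a, 6), round(b1, 6), round(b2, 6))  # recovers 1, 2, 3
```

With real data the criterion is never predicted exactly, but the combined equation typically fits the criterion better than either predictor alone.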
33. OTHER CORRELATION-BASED ANALYSES
More complex correlation-based analyses include:
Discriminant function analysis, which is quite similar to multiple regression analysis with one major difference: continuous predictor variables are used to predict a categorical variable.
Canonical analysis, an extension of multiple regression analysis that produces a correlation based on a group of predictor variables and a group of criterion variables.
Path analysis, which also allows us to see the relations and patterns among a number of variables; the outcome is a diagram that shows how variables are related to one another.
Structural equation modeling (also called LISREL), an extension of path analysis that is more sophisticated and powerful; it clarifies the direct and indirect interrelations among variables relative to a given variable, and it provides more theoretical validity and statistical precision in the diagram it produces.
Factor analysis.
34. PROBLEMS TO CONSIDER IN INTERPRETING CORRELATION COEFFICIENTS
The quality of the information provided by correlation coefficients depends on the data they are calculated from. It is important to ask the following questions when interpreting correlation coefficients:
1. Was the proper correlation method used to calculate the correlation?
2. Do the variables have high reliabilities?
3. Is the validity of the variables strong? Invalid variables produce meaningless results.
4. Is the range of scores to be correlated restricted or extended? Narrow or restricted score ranges lower correlation coefficients, whereas broad or extended score ranges raise them.
5. How large is the sample? The larger the sample, the smaller the value needed to reach statistical significance. Large samples may show correlations that are statistically significant but practically unimportant.
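The restriction-of-range effect can be demonstrated numerically. A small sketch with made-up paired scores, showing that correlating only a narrow band of scores lowers the coefficient:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient for paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores over the full range 1..10
# (y follows x with a small alternating disturbance).
x = list(range(1, 11))
y = [xi + (0.5 if i % 2 == 0 else -0.5) for i, xi in enumerate(x)]
r_full = pearson(x, y)

# Restrict the range to the middle scores only (4 <= x <= 7).
pairs = [(xi, yi) for xi, yi in zip(x, y) if 4 <= xi <= 7]
r_restricted = pearson([p[0] for p in pairs], [p[1] for p in pairs])

print(round(r_full, 3), round(r_restricted, 3))  # the restricted r is smaller
```

The underlying relationship is identical in both calculations; only the spread of scores differs, yet the coefficient computed from the narrow band is noticeably lower.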