A case study that explains how data quality is much better in online surveys, with guidelines on how sampling and non-sampling errors are minimized.
How many people do I need to survey? How many is too many? What are the costs versus the benefits? Determining sample size (the correct sample) is the foundation of a great survey and part of your overall market research strategy.
This presentation will address the issue of sample size determination for the social sciences. A simple example is provided to help everyone understand sample size determination.
Power Analysis: Determining Sample Size for Quantitative Studies (Statistics Solutions)
In this webinar, we go over how to determine the appropriate sample size for a quantitative study by using power analysis. The presentation includes an explanation of what a power analysis is and examples of how to conduct power analyses for common statistical tests. The presentation focuses on power analysis using G*Power and Intellectus Statistics software programs. Sample size calculations for more advanced analyses are briefly discussed.
Minimizing Risk In Phase II and III Sample Size Calculation (nQuery)
[ Watch Webinar: http://bit.ly/2thIgmi ]. In this free webinar, Head of Statistics at Statsols, Ronan Fitzpatrick, addresses the issues of reducing risk in Phase II/III sample size calculations. Topics covered will include:
Sample Size Determination For Different Trial Designs
Bayesian Sample Size Determination
Sample Size For Survival Analysis
& more
A non-technical overview of sample size calculation and why it is necessary, with some brief examples of how to approach the problem and why it is useful to think these calculations through.
The error (or disturbance) of an observed value is the deviation of the observed value from the (unobservable) true value of a quantity of interest (for example, a population mean), and the residual of an observed value is the difference between the observed value and the estimated value of the quantity of interest (for example, a sample mean).
Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically not observable, and hence the statistical error cannot be observed either.
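The distinction between errors and residuals can be illustrated with a few lines of code. This is a generic sketch: the heights are hypothetical, and the population mean is treated as known only for illustration.

```python
import numpy as np

population_mean = 1.75  # assumed known only for this illustration (meters)
heights = np.array([1.80, 1.70, 1.78, 1.76])  # hypothetical sample

errors = heights - population_mean     # requires the (unobservable) true mean
residuals = heights - heights.mean()   # computable from the sample alone

# Residuals always sum to (numerically) zero; errors generally do not.
print(round(residuals.sum(), 10))  # 0.0
print(round(errors.sum(), 2))      # 0.04
```

Note that the residuals sum to zero by construction, which is one reason they cannot simply stand in for the errors.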
Detailed information about experimental design in the field of statistics, including the three most widely accepted and most widely used designs.
DoWhy: An end-to-end library for causal inference (Amit Sharma)
In addition to efficient statistical estimators of a treatment's effect, successful application of causal inference requires specifying assumptions about the mechanisms underlying observed data and testing whether they are valid, and to what extent. However, most libraries for causal inference focus only on the task of providing powerful statistical estimators. We describe DoWhy, an open-source Python library that is built with causal assumptions as its first-class citizens, based on the formal framework of causal graphs to specify and test causal assumptions. DoWhy presents an API for the four steps common to any causal analysis: 1) modeling the data using a causal graph and structural assumptions, 2) identifying whether the desired effect is estimable under the causal model, 3) estimating the effect using statistical estimators, and finally 4) refuting the obtained estimate through robustness checks and sensitivity analyses. In particular, DoWhy implements a number of robustness checks including placebo tests, bootstrap tests, and tests for unobserved confounding. DoWhy is an extensible library that supports interoperability with other implementations, such as EconML and CausalML, for the estimation step.
Introduction to Sampling
When to sample
Representative sample
How to guarantee a representative sample
Random, Systematic, Stratified, Clustered
Sampling Method
When to use Stratified Sampling
Sampling Bias / How to Avoid Sampling Bias
The cost and ease of obtaining samples
Time constraints
Unknown characteristics of the population
Common Segmentation Factors
What type - When - Where - Who
How Do I Determine Sample Size?
Level of confidence
Precision or accuracy (∆)
Standard deviation of the population (σ), “How much variation is in the total data population”
An estimate of standard deviation is needed to start. As standard deviation increases, a larger sample size is needed to obtain reliable results
Sample Size For Continuous Data
Consider the following example:
We want to estimate average call length in handling customer inquiries, and we want our estimate to be accurate to within 1 minute. Based on a small random sample of 30 inquiries we know that the variation in call length, as measured by standard deviation, is 5 minutes. We want to have 95% confidence that the estimate will be in the range of specified accuracy – i.e., 1 minute.
Statistical theory then gives the required sample size by the formula

n = (Z * σ / Δ)²

where n = sample size, Z = the z-value for the chosen confidence level (1.96 for 95%), σ = standard deviation, and Δ = degree of precision required. In our example, the required sample size is:

n = ((1.96 * 5) / 1)² = 96.04, so about 97 calls (fractional sample sizes are rounded up)
Extending the same logic, we can find the sample size required when dealing with a discrete (proportion) population.
If the average population proportion non-defective is p, the population standard deviation can be calculated as σ = √(p(1 - p))
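Both cases can be sketched in a short calculation. This is a generic illustration, not code from the source: the continuous case uses the call-length example above (z = 1.96, σ = 5 minutes, Δ = 1 minute), while the proportion case uses assumed values for p and Δ.

```python
import math

def sample_size_mean(z, sigma, delta):
    """Sample size to estimate a mean to within +/- delta: n = (z*sigma/delta)^2."""
    return math.ceil((z * sigma / delta) ** 2)

def sample_size_proportion(z, p, delta):
    """Sample size for a proportion, using sigma = sqrt(p * (1 - p))."""
    return math.ceil((z / delta) ** 2 * p * (1 - p))

# Call-length example from the text: 95% confidence, sigma = 5 min, delta = 1 min
print(sample_size_mean(1.96, 5, 1))             # 97 (96.04 rounded up)

# Hypothetical proportion example: p = 0.5 (most conservative), delta = 0.05
print(sample_size_proportion(1.96, 0.5, 0.05))  # 385
```

Using p = 0.5 maximizes p(1 - p), so it gives the most conservative (largest) sample size when the true proportion is unknown.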
Sampling is the process of:
Collecting only a portion of the data that is available or could be available & drawing conclusions about the total population (statistical inference )
Audit sampling helps auditors complete their audit work within a given time period
Sampling provides a good alternative to collect data in an effective and efficient manner
Sampling is the process of collecting a portion or subset of the total data that may be available.
All of the data available is often referred to as a Population (N).
The purpose of sampling is to draw conclusions about the population using the sample (n). This is known as statistical inference.
One of the first questions to ask is "Do I need to sample?" The major reason sampling is done is efficiency: it is often too costly or time consuming to measure all of the data. Sampling provides a good alternative for collecting data in an effective and efficient manner. If the circumstances surrounding the data collection plan do not justify sampling, then sampling should not be done. This is often the case in low-volume processes.
All items in the population have an equal chance of being chosen in the sample
Example: A customer satisfaction survey team picking the customers to be contacted at random
How to do random sampling
Generate random numbers from
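The simple random sampling described above, where every unit has an equal chance of selection, can be sketched as follows. The customer IDs and sample size are hypothetical.

```python
import random

customer_ids = list(range(1, 1001))   # hypothetical population of 1,000 customers

random.seed(42)                       # fixed seed so the draw is reproducible
survey_sample = random.sample(customer_ids, k=50)  # sampling without replacement

print(len(survey_sample))         # 50
print(len(set(survey_sample)))    # 50 -> no customer was picked twice
```

`random.sample` draws without replacement, so each customer appears at most once, matching the survey-team example above.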
Chapter 7
Estimation
Chapter Learning Objectives
1. Explain the concepts of estimation, point estimates, confidence level, and confidence interval
2. Calculate and interpret confidence intervals for means
3. Describe the concept of risk and how to reduce it
4. Calculate and interpret confidence intervals for proportions
In this chapter, we discuss the procedures involved in estimating population means and proportions based on the
principles of sampling and statistical inference discussed in Chapter 6 (“Sampling and Sampling Distributions”).
Knowledge about the sampling distribution allows us to estimate population means and proportions from sample
outcomes and to assess the accuracy of these estimates. Consider three examples of information derived from samples.
Example 1: Based on a random sample of 1,019 U.S. adults, a March 2016 Gallup poll found that the percentage of
Americans who identify as environmentalists has decreased. Compared with the 1991 high of 78%, in 2016 only 42% of
Americans self-identified as environmentalists. In its report, Gallup attributed the decline to several factors, including the
adoption of routine environmentally friendly practices and the politicization of environmental issues.1
Example 2: Every other year, the National Opinion Research Center conducts the General Social Survey (GSS) on a
representative sample of about 1,500 respondents. The GSS, from which many of the examples in this book are selected,
is designed to provide social science researchers with a readily accessible database of socially relevant attitudes,
behaviors, and attributes of a cross-section of the U.S. adult population. For example, in analyzing the responses to the
2014 GSS, researchers found that the average respondent’s education was about 13.77 years. This average probably
differs from the average of the population from which the GSS sample was drawn. However, we can establish that in most
cases the sample mean (in this case, 13.77 years) is fairly close to the actual true average in the population.
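As a rough sketch of how such an estimate is qualified, here is a 95% confidence interval for the GSS mean. The mean (13.77 years) and sample size (about 1,500) come from the text, but the standard deviation of 3.0 is an assumed value for illustration only.

```python
import math

# Sample mean and n from the text; the standard deviation is assumed.
mean, sd, n, z = 13.77, 3.0, 1500, 1.96

se = sd / math.sqrt(n)                 # standard error of the mean
ci = (mean - z * se, mean + z * se)    # 95% confidence interval

print(round(se, 3))                          # 0.077
print(tuple(round(x, 2) for x in ci))        # (13.62, 13.92)
```

Under these assumptions, the sample mean of 13.77 years pins the population mean down to within about 0.15 years.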
Example 3: In 2016, North Carolina legislators passed House Bill 2, prohibiting transgender people from using bathrooms
and locker rooms that do not match the gender on their birth certificate. The law quickly drew protests from civil rights and
LGBT (lesbian, gay, bisexual, transgender) rights groups. A CNN/ORC poll of 1,001 Americans revealed that 39% of
those surveyed strongly oppose laws that require transgender individuals to use restroom facilities that correspond to
their gender at birth rather than their gender identity. Seventy-five percent favor laws guaranteeing equal protection for
transgender people in jobs, housing, and public accommodations.2
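The margin of error behind such poll figures can be sketched directly from the reported numbers (39% of 1,001 respondents). This is a generic 95% confidence interval for a proportion, not a calculation from the source.

```python
import math

p, n, z = 0.39, 1001, 1.96            # reported proportion, sample size, 95% z

se = math.sqrt(p * (1 - p) / n)        # standard error of the sample proportion
ci = (p - z * se, p + z * se)          # 95% confidence interval

print(round(se, 4))                          # 0.0154
print(tuple(round(x, 2) for x in ci))        # (0.36, 0.42)
```

So a poll of about 1,000 respondents carries a margin of error of roughly three percentage points at 95% confidence.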
What is sampling?
Sampling is the act, process, or technique of selecting a suitable sample, or a representative part of a population for the purpose of determining parameters or characteristics of the whole population.
Characteristics of a good sample
-True representative
-Free from bias
-Accurate
-Comprehensive
-Approachable
-Good size
-Feasible
-Goal orientation
-Practical and economical
Sampling Error
A sampling error is a statistical error that occurs when an analyst does not select a sample that represents the entire population of data and the results found in the sample do not represent the results that would be obtained from the entire population.
and much more about sampling techniques.
Marketing research project on the t-test and sample design, with a detailed analysis of all aspects of the t-test and of the tools used to evaluate the different variants.
This presentation is meant to help choose the appropriate statistical analysis for IBDP Biology IAs. It was created as support for teachers but is also useful for students.
Within the presentation, we discuss different types of biological data, and how to describe and analyse it using mathematics.
Quality of data
1. Quality of data is definitely better in the case of online surveys

2. Types of errors

There are two kinds of errors that can creep in during a survey: sampling errors and non-sampling (human) errors.
3. Sampling errors

Sampling errors are those that occur when the statistical characteristics of a population are estimated from a sample of that population.

A way to lower this error is to use randomized sampling. In online surveys, the number of contacts is very high, and even with low incidence rates and low completion rates, the level of randomness achieved is simply not possible in an offline study.
4. Sampling errors

Also, if required, we apply a process known as "weighting".

Every year, we conduct a baseline study covering 109 urban centres, 196 villages, and 80 out of 88 NSSO regions, covering 30,066 households and 121,311 individuals across 28 states and 4 UTs. Using this baseline study, "Juxt India Consumer Landscape", we create a matrix of unique weights for each age-gender-location combination.

Using this matrix, we can project the data from any survey to the nationwide population, removing the sampling error and also the self-selection bias in this weighting process.
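The weighting step described above can be sketched as a simple cell-weighting (post-stratification) calculation. The population and sample shares below are hypothetical stand-ins for the baseline-study figures.

```python
# Hypothetical share of each age-gender cell in the target population.
population_share = {
    ("18-24", "M"): 0.30, ("18-24", "F"): 0.28,
    ("25-34", "M"): 0.22, ("25-34", "F"): 0.20,
}
# Hypothetical share of each cell among the survey respondents.
sample_share = {
    ("18-24", "M"): 0.45, ("18-24", "F"): 0.25,
    ("25-34", "M"): 0.18, ("25-34", "F"): 0.12,
}

# weight = population share / sample share: over-represented cells get
# weights below 1, under-represented cells get weights above 1.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

for cell, w in sorted(weights.items()):
    print(cell, round(w, 2))
```

Each respondent's answers are then multiplied by the weight of their cell, so the weighted sample mirrors the population composition.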
5. Non-sampling (human/system) errors

In an offline study, the questionnaire is administered by a human, who reads it out in his own interpretation, which may introduce bias and errors. In an online study, however, it is the respondent's own interpretation that counts, which is why we use extremely simple English; the survey can even be run in local languages, thus removing this non-sampling error.
6. Non-sampling (human/system) errors

There can also be "bad respondents". So, to "clean" this data:

We clear out the junk respondents; we don't believe in "response cleaning", we delete the case/respondent itself.

We remove all the "straight liners", respondents who fill in the surveys in patterns.

We also do "mode time cleaning". The completion times for the majority of responses fall within 2/3 to 4/3 of the mode time (this band can be flexible depending on the type of questionnaire). Outliers outside this band are discarded. A sample of mode time cleaning can be seen in the next slide.
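The mode-time rule just described can be sketched in a few lines. The completion times below (in minutes) are hypothetical.

```python
from statistics import mode

# Hypothetical completion times in minutes; 13 is the most common value.
times = [13, 13, 12, 14, 13, 5, 13, 40, 11, 15, 13, 3]

mode_time = mode(times)                          # most common completion time
low, high = (2 / 3) * mode_time, (4 / 3) * mode_time

clean = [t for t in times if low <= t <= high]          # kept responses
discarded = [t for t in times if not (low <= t <= high)]  # outliers

print(mode_time)     # 13
print(discarded)     # [5, 40, 3]
```

Responses far faster than 2/3 of the mode time suggest careless filling; those far slower than 4/3 suggest interruptions, and both are discarded.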
7. Typical scatter plot of survey response times

[Scatter plot of completion times (x-axis: responses 200 to 1200, y-axis: time 0 to 30 minutes). The mode time, the most commonly occurring completion time, is 13 minutes; most responses fall between 2/3 and 4/3 of the mode time, and outliers lying outside that band are cleaned out.]
8. Normality, reliability and validity tests

There are also some tests that can be run at the client's request to ensure the statistical validity of the data. Let us look at them one by one.
9. Normality test

The objective of sample normality tests is to ensure the sample is normally distributed and randomly selected. It is important that the normality of the sample is confirmed before subjecting it to inferential and differential analyses.

Let us take the example of a normality test on the age of respondents.
10. Histogram (graphical method)

An initial impression of the normality of the distribution can be gained by examining the histogram. From the figure, it is evident that the collected data (on age) is very close to a normal distribution curve.
11. Normal Q-Q plot of age

In a Normal Q-Q plot, if the variable were normally distributed, the dots would fit the line very closely. In this case, the points in the upper right of the chart indicate some skewing caused by the extremely large data values; otherwise the data appear to be normally distributed.
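A formal test such as the Shapiro-Wilk test can complement the histogram and Q-Q plot. This sketch assumes SciPy is available; the ages below are simulated, not the survey's actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ages = rng.normal(loc=30, scale=8, size=200)   # roughly normal simulated ages

w_stat, p_value = stats.shapiro(ages)          # Shapiro-Wilk normality test
print(round(w_stat, 3))                        # statistic close to 1 for normal data

# A clearly skewed sample, by contrast, is rejected at the 5% level:
skewed = rng.exponential(scale=8, size=200)
print(stats.shapiro(skewed).pvalue < 0.05)     # True
```

A large p-value means normality cannot be rejected; a tiny one, as for the skewed sample, means the data should not be treated as normal.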
12. Reliability test

Reliability is the extent to which a measuring procedure yields consistent results on repeated administrations of the scale. The objective of the reliability test is to ensure that the measurable items of each variable measure the same underlying construct. The reliability of the instrument is examined through Cronbach's alpha coefficient.
13. Cronbach's alpha (α)

Cronbach's alpha is the average of all possible split-half correlation coefficients resulting from the different ways of splitting the scale items. Its value varies from 0 to 1; α < 0.6 indicates unsatisfactory internal consistency reliability (see Malhotra & Birks, 2007, p. 358). Note: alpha tends to increase with the number of items in the scale.

The Cronbach's alpha reliability coefficient for the choice factors scale (in our sample questionnaire) as a whole was 0.78071, indicating that the scale as a whole has acceptable internal consistency and reliability; no items were deleted.
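Cronbach's alpha can be computed directly from an item-score matrix via the standard variance formula, α = k/(k−1) · (1 − Σ item variances / variance of total score). The ratings below are hypothetical, not the survey's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point ratings from 6 respondents on 4 scale items.
scores = [[4, 5, 4, 4],
          [3, 3, 3, 4],
          [5, 5, 4, 5],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [3, 2, 3, 3]]

print(round(cronbach_alpha(scores), 2))  # 0.93
```

The high alpha here reflects that all four hypothetical items move together across respondents, i.e., they measure the same underlying construct.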
14. Validity test

While the reliability test is necessary, it is not sufficient. The objective of the validity test is to identify whether the proposed items in a study are valid for measuring the underlying concept, i.e., how accurately the concept corresponds to the real world. In the test case, the concept referred to the respondents' perceived importance of the factors influencing their intention to study at X.
15. Sample validity test

Importance of the aspects related to content & structure of the course offered. Correlations:

                                                            a12_7  a12_1  a12_4  a12_2  a12_5  a12_6  a12_3
Adaptability to professional environment (a12_7)             1.00  -0.07  -0.06   0.00  -0.09  -0.17  -0.12
Reasonableness of the minimum qualification req. (a12_1)    -0.07   1.00  -0.05  -0.18  -0.13   0.04  -0.21
Specialized programs in the offing (a12_4)                  -0.06  -0.05   1.00  -0.17  -0.12  -0.33  -0.16
Range of courses offered (a12_2)                             0.00  -0.18  -0.17   1.00   0.01  -0.11  -0.28
Reasonableness of the course duration (a12_5)               -0.09  -0.13  -0.12   0.01   1.00  -0.25  -0.26
Topicality of course content (a12_6)                        -0.17   0.04  -0.33  -0.11  -0.25   1.00  -0.06
Flexibility in selection of course (a12_3)                  -0.12  -0.21  -0.16  -0.28  -0.26  -0.06   1.00
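A correlation matrix like the one above can be produced directly from item responses. The data here are simulated for illustration, not the survey's actual responses.

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 simulated respondents rating 7 items on a 1-5 scale.
responses = rng.integers(1, 6, size=(100, 7))

corr = np.corrcoef(responses, rowvar=False)   # 7 x 7 item correlation matrix

print(corr.shape)                       # (7, 7)
print(np.allclose(np.diag(corr), 1.0))  # True: each item correlates 1.00 with itself
print(np.allclose(corr, corr.T))        # True: the matrix is symmetric
```

The diagonal of 1.00 and the symmetry seen in the table above are structural properties of any correlation matrix, which is a quick sanity check when reading such output.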
16. Validity test

The questionnaire for the test study was developed using choice factors from similar studies as a point of reference, then adapted to the Indian context; in fact, the correlation between the factors was minimal. Thus, the content validity of the questionnaire was addressed.