The document discusses important statistical terms related to sampling. It defines a population as the set of all measurements of interest to the researcher, and a sample as a subset of the population. Sampling is done to get information about large populations at lower cost, and often with greater accuracy than studying the entire population. The document outlines the different types of sampling methods, including probability and non-probability sampling, with examples such as simple random sampling, systematic sampling, and cluster sampling. It also discusses sample size, sampling error, and Type 1 and Type 2 errors.
A non-technical overview of sample size calculation: why it is necessary, with brief examples of how to approach the problem and why it is worth thinking through these calculations.
In the presentation, hypothesis testing is explained from scratch, and a tree diagram shows which parametric test can be applied in which situation.
A presentation meant for non-statisticians on statistics and general statistical analysis. It provides a short overview of the processes involved in data collection, storage, hypothesis generation, and statistical analysis. It does not deal with Bayesian statistics. Presented at PRODVANCE 2016, Ahmedabad.
Practical Methods To Overcome Sample Size Challenges (nQuery)
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues by focusing on one of the more complex areas for sample size determination: survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How to deal with expected dropouts or censoring. We compare the simple loss-to-follow-up method with integrating a dropout process into the sample size model.
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how best to apply a sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for how to find the effect size value. We examine those contrasting interpretations, determine the best method, and discuss how to handle parameterization options.
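As a concrete illustration of the dropout issue mentioned above, a common rule of thumb (the simple loss-to-follow-up inflation, not necessarily the exact method used in the webinar) divides the required evaluable sample size by the expected retention rate:

```python
import math

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a computed sample size so that, after the expected
    fraction of dropouts, roughly n_required subjects remain evaluable."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(n_required / (1 - dropout_rate))

# Hypothetical numbers: 200 evaluable subjects needed, 15% expected dropout
print(adjust_for_dropout(200, 0.15))  # 236
```

Integrating a dropout process directly into the survival model, as the webinar discusses, is more accurate than this flat inflation, since censored subjects still contribute partial information.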
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods to overcome common study design challenges and how these affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties such as:
Case-Control Analysis: We will examine how to deal with study constraints and with analyses performed during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive or impractical. We examine how sample size calculations are affected with cluster or Mendelian randomization.
Rare Events: How does an outcome being rare affect the types of study design and statistical methods chosen in your study?
5 Essential Steps for Sample Size Determination in Clinical Trials (nQuery)
In this free webinar hosted by nQuery Researcher & Statistician Eimear Keyes, we map out the 5 essential steps for sample size determination in clinical trials. At each step, Eimear highlights the important function it plays and how to avoid the errors that will negatively impact your sample size determination and, therefore, your study.
Watch the Video: https://www.statsols.com/webinar/the-5-essential-steps-for-sample-size-determination
Minimizing Risk In Phase II and III Sample Size Calculation (nQuery)
[ Watch Webinar: http://bit.ly/2thIgmi ]. In this free webinar, Head of Statistics at Statsols, Ronan Fitzpatrick, addresses the issues of reducing risk in Phase II/III sample size calculations. Topics covered will include:
Sample Size Determination For Different Trial Designs
Bayesian Sample Size Determination
Sample Size For Survival Analysis
& more
2.0. Statistical Methods and Determination of Sample Size (salummkata1)
statistical methods and determination of sample size
These guidelines focus on the validation of the bioanalytical methods generating quantitative concentration data used for pharmacokinetic and toxicokinetic parameter determinations.
Combination of informative biomarkers in small pilot studies and estimation ... (LEGATO project)
Background:
Biomarker candidates are defined as measurable molecules found in biological media. According to the Biomarkers Definitions Working Group (2001), biomarkers cover a rather wide range of parameters. Biomarkers are now used widely in medical research, where a single biomarker may not possess the desired cause-effect association for disease classification and outcome prediction. Current research efforts therefore focus on combining biomarkers. With new technologies such as microarrays, next-generation sequencing, and mass spectrometry, researchers can obtain biomarker candidates numbering in the tens of thousands. To avoid wasting money and time, it is suggested that the number of patients be strictly controlled. However, pilot studies usually have low statistical power, which reduces the chance of detecting a true effect.
Sample size calculation in medical research (Kannan Iyanar)
A short description on estimation of sample size in health care research. It describes the basic concepts in sample size estimation and various important formulae used for it.
Sample size and how to calculate it
- Why sample size is important
- Alpha and beta errors
- Main outcome and Effect size
- Practical examples using means, proportions, correlation, and confidence intervals
Scientists using laboratory animals are under increasing pressure to justify their sample sizes with a "power analysis", both to improve study reproducibility and on ethical grounds.
In this presentation, I review the three methods currently used to determine sample size: "tradition" or "common sense", the "resource equation", and the "power analysis".
Comparing Research Designs FW 2013 Handout Version (Pat Barlow)
This is an updated version of my Comparing Research Designs lecture, which now includes discussions on: (1) common considerations with research design such as bias, reliability, validity, and confounding; and (2) expanded discussion of RCT designs including factorial and cross-over designs.
This slideshow covers hypothesis testing and goodness-of-fit statistics. It may be useful for students, teachers, and managers concerned with biostatistics, bioinformatics, data science, etc.
Abstract
To address the growing need for information on a therapeutic beyond safety and efficacy, and the increasing trend to extrapolate data from traditional randomized controlled trials (RCTs) to influence clinical practice, an in-depth evaluation of the utility and practicability of RCTs in influencing real-world clinical practice is presented. The pragmatic clinical trial (PCT) is introduced as a potentially viable means to influence clinical practice, and the regulatory impact of this new adaptation is explored. Concepts of study design, including validity, generalizability, efficacy, and effectiveness, are discussed for both RCTs and PCTs.
This is the handout version of a lecture I give to medical residents and fellows on the basics of clinical research designs and the inherent issues that go along with each one. I give this lecture as part of a multi-module lecture series on research design and statistical analysis.
2. Important statistical terms
Population: a set which includes all measurements of interest to the researcher (the collection of all responses, measurements, or counts that are of interest)
Sample: a subset of the population
Dr. Keerti Jain, Associate Professor, GD Goenka University, Gurgaon
3. Why sampling?
Get information about large populations
Less cost
Less field time
More accuracy, i.e., can do a better job of data collection
When it is impossible to study the whole population
4. Target Population: the population to be studied / to which the investigator wants to generalize his results
Sampling Unit: the smallest unit from which a sample can be selected
Sampling Frame: the list of all the sampling units from which the sample is drawn
Sampling Scheme: the method of selecting sampling units from the sampling frame
5. Types of sampling
Non-probability samples
Probability samples
6. Non-probability samples
Convenience samples (ease of access): the sample is selected from elements of a population that are easily accessible
Snowball sampling (friend of a friend, etc.)
Purposive (judgemental) sampling: you choose who you think should be in the study
Quota sampling
7. Non-probability samples
The probability of being chosen is unknown
Cheaper, but unable to generalize
Potential for bias
8. Probability samples
Random sampling: each subject has a known probability of being selected
Allows application of statistical sampling theory to the results, to:
Generalize
Test hypotheses
9. Methods used in probability samples
Simple random sampling
Systematic sampling
Stratified sampling
Multi-stage sampling
Cluster sampling
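The most common probability methods listed above can be sketched in a few lines of Python (the population, strata, and sample sizes here are made up purely for illustration):

```python
import random

population = list(range(1, 101))  # 100 numbered sampling units

# Simple random sampling: every unit has an equal chance of selection
srs = random.sample(population, 10)

# Systematic sampling: every k-th unit after a random start
k = len(population) // 10          # sampling interval
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: draw a separate random sample within each stratum
strata = {"urban": population[:60], "rural": population[60:]}
stratified = [u for units in strata.values() for u in random.sample(units, 5)]

print(len(srs), len(systematic), len(stratified))  # 10 10 10
```

In each case the selection probability of every unit is known in advance, which is exactly what distinguishes these methods from the non-probability samples of slides 6-7.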
12. Sampling fraction
Ratio between sample size and population size
Systematic sampling
14. Cluster sampling
Cluster: a group of sampling units close to each
other i.e. crowding together in the same area or
neighborhood
18. Systematic error (or bias)
Inaccurate response (information bias)
Selection bias
Sampling error (random error)
Errors in sample
19. Type 1 error
The probability of finding a difference with our sample compared to the population when there really isn't one
Known as α (or "type 1 error")
Usually set at 5% (or 0.05)
20. Type 2 error
The probability of not finding a difference that actually exists between our sample and the population
Known as β (or "type 2 error")
Power is (1 − β) and is usually 80%
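The α and β values quoted on the two slides above enter the sample size formulas in the problems that follow as standard normal quantiles. A quick check with Python's standard library:

```python
from statistics import NormalDist

z = NormalDist()                  # standard normal distribution

alpha = 0.05                      # Type 1 error rate (two-sided)
z_alpha = z.inv_cdf(1 - alpha/2)  # critical value, about 1.96

beta = 0.20                       # Type 2 error rate
power = 1 - beta                  # the usual 80% power
z_beta = z.inv_cdf(power)         # about 0.84

print(round(z_alpha, 2), round(z_beta, 2))  # 1.96 0.84
```

These two quantiles (1.96 for a 5% two-sided α, 0.84 for 80% power) are the constants that appear repeatedly in the worked problems below.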
21. Types of Studies
Qualitative:
•Calculating the proportion
•Calculating the difference of proportions
Quantitative:
Calculating the mean
Calculating the difference in means
23. Problem 1
A study is to be performed to determine a certain parameter in a community. From a previous study a standard deviation of 46 was obtained. If a sampling error of up to 4 is to be accepted, how many subjects should be included in this study at a 99% level of confidence?
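A sketch of the usual single-mean calculation, n = (z·σ/E)², applied to Problem 1 (the deck's own worked-solution slide is not included here, so this assumes the standard formula is intended):

```python
import math
from statistics import NormalDist

sd, error, confidence = 46, 4, 0.99
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # about 2.576 for 99%
n = math.ceil((z * sd / error) ** 2)                # n = (z * sd / E)^2
print(n)  # 878 subjects
```

Rounding z to 2.58 before squaring, as some textbooks do, gives a slightly larger answer; always round the final n up, never down.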
25. Problem 2
A study is to be done to determine the effect of two drugs (A and B) on blood glucose level. From previous studies using those drugs, standard deviations of BGL of 8 and 12 g/dl were obtained, respectively. A significance level of 95% and a power of 90% are required to detect a mean difference between the two groups of 3 g/dl. How many subjects should be included in each group?
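For two independent means, a standard per-group formula is n = (z_a + z_b)²(s₁² + s₂²)/d². A sketch for Problem 2, assuming that formula is the intended one:

```python
import math
from statistics import NormalDist

nd = NormalDist()
s1, s2, diff = 8, 12, 3      # standard deviations and target difference (g/dl)
z_alpha = nd.inv_cdf(0.975)  # two-sided 5% significance level
z_beta = nd.inv_cdf(0.90)    # 90% power
n = math.ceil((z_alpha + z_beta) ** 2 * (s1**2 + s2**2) / diff**2)
print(n)  # 243 subjects per group
```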
27. Problem 3
It was desired to estimate the proportion of anaemic children in a certain preparatory school. In a similar study at another school, a proportion of 30% was detected. Compute the minimal sample size required at a confidence level of 95%, accepting a difference of up to 4% from the true population proportion.
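A sketch using the single-proportion formula n = z²·p(1 − p)/d², assuming that is the intended approach:

```python
import math
from statistics import NormalDist

p, d = 0.30, 0.04                # expected proportion and accepted difference
z = NormalDist().inv_cdf(0.975)  # 95% confidence level
n = math.ceil(z**2 * p * (1 - p) / d**2)
print(n)  # 505 children
```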
29. Problem 4
In previous studies, the percentage of hypertensives among diabetics was 70% and among non-diabetics was 40% in a certain community. A researcher wants to perform a comparative study of hypertension among diabetics and non-diabetics at a 95% confidence level and 80% power. What is the minimal sample to be taken from each group, with a 4% accepted difference from the true value?
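A sketch using the standard two-proportion formula n = (z_a + z_b)²[p₁(1 − p₁) + p₂(1 − p₂)]/(p₁ − p₂)². Note the problem's "4% accepted difference" is ambiguous; here the detectable difference is taken as the full 30-point gap between the two groups, which is the usual reading of this formula, so this is one plausible interpretation rather than the deck's own solution:

```python
import math
from statistics import NormalDist

nd = NormalDist()
p1, p2 = 0.70, 0.40          # hypertension rates among diabetics / non-diabetics
z_alpha = nd.inv_cdf(0.975)  # 95% confidence, two-sided
z_beta = nd.inv_cdf(0.80)    # 80% power
n = math.ceil((z_alpha + z_beta) ** 2
              * (p1 * (1 - p1) + p2 * (1 - p2))
              / (p1 - p2) ** 2)
print(n)  # 40 subjects per group
```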