1. Common statistical errors in medical publications
2. Errors are surprisingly common
• Statistical errors in medical journals are surprisingly common. For example:
• Olsen (2003) found that 54% of a sample of 141 papers published in Infection and Immunity had errors in reporting, analysis or both.
• Yim et al. (2010) found 79% of a sample of 139 papers published in the Korean Journal of Pain had errors.
• Nieuwenhuis et al. (2011) found that 15% of articles reviewed in the top-ranking journals Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience had used the wrong method.
3. Types of errors
Errors can be broadly classified into three main areas:
• Errors in design
• Errors in analysis
• Reporting and interpretational errors
Various publications describe these problems, e.g. Clark (2011), Lang (2004), Olsen (2003), Strasak et al. (2007).
4. Common Design Errors:
• Lack of a sample size calculation (or a wrong calculation)
• Studies with too few subjects are underpowered – a difference won't be found even if a real difference exists (Altman and Bland, 1995)
[Results table from Sung et al. (1993) shown on slide]
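The slides flag missing sample size calculations without showing one. As a minimal illustrative sketch (not taken from the talk), the standard normal-approximation formula for a two-sided, two-sample comparison of means can be written in a few lines of stdlib Python:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_two_sample(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means,
    normal approximation. effect_size is Cohen's d = (mu1 - mu2) / sigma.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (d = 0.5), 5% two-sided alpha, 80% power:
n_per_group = n_per_group_two_sample(0.5)
print(n_per_group)  # 63 per group
```

The normal approximation slightly underestimates the exact t-based answer (about 64 per group here); dedicated software or a statistician should confirm any real calculation.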
6. More Design Errors:
• Primary outcome measures unclear
• Randomisation method unclear
• Hypotheses unclear
• An a priori analysis plan should be made so that it's clear that the research isn't the result of a "fishing expedition"
7. Errors in analysis
• Testing for equality of baseline characteristics in RCTs
• Potentially misleading, not meaningful, not needed
[Baseline table from Yang et al. (2017) shown on slide: 19 comparisons – expect 5% (~1) to be spuriously significant]
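The "expect ~1 of 19 to be spuriously significant" point is easy to verify by simulation. The sketch below is illustrative only (a large-sample z-test stands in for the t-tests a paper would report): both arms are drawn from the same population, so every "significant" baseline difference is spurious by construction.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
norm = NormalDist()

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test for a mean difference."""
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - norm.cdf(abs(z)))

# 19 baseline variables compared between two arms of a null "RCT",
# repeated over many simulated trials.
n_tests, n_sims, hits = 19, 300, 0
for _ in range(n_sims):
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(100)]
        b = [random.gauss(0, 1) for _ in range(100)]
        if two_sample_p(a, b) < 0.05:
            hits += 1

rate = hits / (n_sims * n_tests)
print(f"spurious-significance rate per test: {rate:.3f}")
print(f"expected spurious hits in a 19-row baseline table: {n_tests * rate:.2f}")
```

The per-test rate lands near 0.05, so a 19-comparison baseline table yields about one spuriously "significant" imbalance on average, exactly as the slide notes.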
8. More on analysis errors:
• Use of the wrong test, e.g.
  • Two-sample t-test (for independent groups) used where a paired t-test (for dependent groups) should have been, and vice versa
  • Parametric methods used where non-parametric methods should have been used (i.e. with skewed data or small samples)
  • Methods not appropriate for the data type, e.g. linear regression used with an ordinal response
• Failure to adjust p-values for multiple testing (to avoid Type I errors)
• Failure to carefully define all of the tests used in the methods section
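One standard remedy for the multiple-testing problem above is a family-wise adjustment. This is a minimal sketch of Holm's step-down procedure (not mentioned on the slide; chosen here as a common choice that is uniformly more powerful than plain Bonferroni). A real analysis would normally use a statistics package rather than hand-rolled code.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down adjustment: compare the smallest p-value against
    alpha/m, the next against alpha/(m-1), and so on; stop at the first
    failure. Controls the family-wise Type I error rate."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values also fail
    return rejected

# Five raw p-values; unadjusted, three of them look "significant" at 0.05.
raw = [0.001, 0.013, 0.041, 0.20, 0.74]
print(holm_bonferroni(raw))  # [True, False, False, False, False]
```

Note how 0.013 and 0.041, "significant" when tested naively, do not survive the adjustment: exactly the Type I errors the slide warns about.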
9. And more on analysis errors:
• In RCTs, comparisons are made within groups but the between-groups tests are not performed (or are ignored)
• Watson et al. (2009) compared an anti-ageing product (n=30) with a placebo ("vehicle") (n=30).
• They found the test product showed significant improvement in facial wrinkles compared to baseline assessment (P = 0.013), with no significant improvement given by the vehicle (P = 0.11).
• But there was no significant difference between test and vehicle (P = 0.72).
• Media suggested this was the first anti-ageing cream "proven to work."
• But the treatment vs placebo comparison is what matters – this is the only comparison that can show that the treatment works (or not)!
See Bland and Altman (2011) for a useful discussion of this paper.
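The within-versus-between fallacy can be reproduced numerically. The summary statistics below are hypothetical, chosen only to mimic the Watson et al. pattern (they are not the published data): one arm crosses the within-group significance threshold, the other just misses it, yet the between-groups comparison is nowhere near significant.

```python
from statistics import NormalDist

norm = NormalDist()

def p_from_z(z):
    """Two-sided p-value from a z statistic (normal approximation;
    a t distribution with ~29 df gives very similar numbers here)."""
    return 2 * (1 - norm.cdf(abs(z)))

# Hypothetical mean change from baseline (common SD, n per arm):
n, sd = 30, 0.8
treat_change, vehicle_change = 0.30, 0.22

p_within_treat = p_from_z(treat_change / (sd / n ** 0.5))
p_within_vehicle = p_from_z(vehicle_change / (sd / n ** 0.5))
p_between = p_from_z((treat_change - vehicle_change) / (sd * (2 / n) ** 0.5))

print(f"treatment vs baseline: p = {p_within_treat:.3f}")   # "significant"
print(f"vehicle vs baseline:   p = {p_within_vehicle:.3f}") # "not significant"
print(f"treatment vs vehicle:  p = {p_between:.3f}")        # the comparison that matters
```

Reporting only the first two p-values makes the treatment look "proven"; the third shows the arms are statistically indistinguishable, which is the Bland and Altman (2011) point.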
10. And more on analysis:
• Continuous data made binary or into ordinal categories (or ordinal categories made binary) without justification
• May be done to "find"/increase significance
• Typically a great loss of information results from dichotomisation
• Failure to show/comment on the assumptions required for testing
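The information lost by dichotomisation shows up directly as lost power. This small simulation is illustrative only (z-approximations throughout, an assumed effect size of 0.5 SD, and a median split): the same data are analysed once as a continuous outcome and once after dichotomising.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
norm = NormalDist()
SIMS = 400

def p_two_sample(x, y):
    """Two-sided large-sample z-test for a difference in means."""
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    return 2 * (1 - norm.cdf(abs(mean(x) - mean(y)) / se))

def p_two_proportions(x, y, cut=0.0):
    """Dichotomise at `cut`, then z-test the two proportions."""
    p1 = sum(v > cut for v in x) / len(x)
    p2 = sum(v > cut for v in y) / len(y)
    pooled = (p1 * len(x) + p2 * len(y)) / (len(x) + len(y))
    se = (pooled * (1 - pooled) * (1 / len(x) + 1 / len(y))) ** 0.5
    return 2 * (1 - norm.cdf(abs(p1 - p2) / se))

wins_cont = wins_binary = 0
for _ in range(SIMS):
    a = [random.gauss(0.0, 1) for _ in range(50)]  # control
    b = [random.gauss(0.5, 1) for _ in range(50)]  # true shift of 0.5 SD
    if p_two_sample(a, b) < 0.05:
        wins_cont += 1
    if p_two_proportions(a, b) < 0.05:
        wins_binary += 1

print(f"power, continuous outcome:   {wins_cont / SIMS:.2f}")
print(f"power, dichotomised outcome: {wins_binary / SIMS:.2f}")
```

Under these assumptions the continuous analysis detects the true difference in roughly 70% of trials, the dichotomised one in only about half: dichotomisation throws away roughly the equivalent of a third of the sample.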
11. Errors/Deficiencies in Reporting Statistics
• Failure to use (or define the use of) a variability measure (e.g. SD)
• Use of mean and standard deviation (SD) in skewed data
  • median and quartiles are preferable
• Using the standard error (SE) of the mean instead of the SD in descriptive statistics, or confusing the two
  • SE is used because it is smaller, so it "looks" better
• Reporting thresholds for p-values rather than the actual p-values
• Reporting a p-value but no data (i.e. estimate and interval, change and interval, etc.) – like the anti-ageing cream study
• Reporting the significance of a test or analysis that is not shown or described
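The SD/SE and skewed-data points above can both be seen in one example. The data are simulated (hypothetical right-skewed "length of stay" values, not from any study): the SE is far smaller than the SD because it describes the precision of the mean rather than the spread of patients, and the mean sits well above the median because of the skew.

```python
import random
from statistics import mean, stdev, median, quantiles

random.seed(2)

# Strongly right-skewed data, e.g. hospital length of stay in days.
stay = [random.lognormvariate(1.0, 0.8) for _ in range(500)]

sd = stdev(stay)
se = sd / len(stay) ** 0.5           # SE shrinks with n; SD does not
q1, q2, q3 = quantiles(stay, n=4)    # quartiles; q2 is the median

print(f"mean {mean(stay):.1f}, SD {sd:.1f}, SE {se:.2f}")
print(f"median {q2:.1f} (IQR {q1:.1f} to {q3:.1f})")
```

Reporting "mean ± SE" here would make the data look far tighter than they are; for skewed data like these, the median with quartiles is the more honest summary, as the slide says.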
12. Errors in conclusions
• Correlation is not causation!
• Make sure that conclusions don't suggest causation
13. Errors in conclusions
• Conclusions are drawn that are not supported by results
• Interpreting "not significant" as "not different" or "equivalent"
(Examples: Yang et al., 2017; Sung et al., 1993)
15. References
Useful summaries of errors
Altman DG, Bland JM. Statistics notes: Absence of evidence is not evidence of absence. BMJ 1995; 311: 485
Bland JM, Altman DG. Comparisons against baseline within randomised groups are often used and can be highly misleading. Trials 2011; 12: 264
Clark GT, Mulligan R. Fifteen common mistakes encountered in clinical research. Journal of Prosthodontic Research 2011; 55: 1-6
Lang T. Twenty Statistical Errors Even YOU Can Find in Biomedical Research Articles. Croatian Medical Journal 2004; 45(4): 361-370
Nieuwenhuis S et al. Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience 2011; 14: 1105-1107
Olsen CH. Guest commentary: Review of the Use of Statistics in Infection and Immunity. Infection and Immunity 2003; 71(12): 6689-6692
Strasak AM et al. Statistical errors in medical research – a review of common pitfalls. Swiss Medical Weekly 2007; 137: 44-49
Yim KH et al. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain. Korean Journal of Pain 2010; 23: 35-41
16. References
Examples
Sung et al. Octreotide infusion or emergency sclerotherapy for variceal haemorrhage. Lancet 1993; 342: 637-41
Watson REB et al. A cosmetic 'anti-ageing' product improves photoaged skin: a double-blind, randomized controlled trial. Br J Dermatol 2009; 161: 419-426
Yang et al. Finding the Optimal Volume and Intensity of Resistance Training Exercise for Type 2 Diabetes: The FORTE Study, a Randomized Trial. Diabetes Research and Clinical Practice 2017; 130: 98-107
http://www.tylervigen.com/spurious-correlations (correlation plots)
Thanks to Deb Wyatt, Michael Martin and the MedStats Google Group users for some great examples and references.
17. Additional thoughts
• Several of you asked me how you could get in contact with statisticians to include as reviewers for papers or on your editorial boards. There are several approaches that can be taken:
1. Email the anzstat mailing list (http://www.maths.uq.edu.au/research/research_centres/anzstat/). This is a list for people interested in statistics. Because you could identify people at any stage of their career, or non-statisticians, it would be important to ask for a CV and maybe check references.
2. Approach university department heads in stats/maths/biostatistics and ask for recommendations on people to invite.
3. I plan to talk to the Statistical Society of Australia about putting together a registry of statisticians willing to help.