494 tympanic body temperatures recorded on ED admission
207 RSI (anaesthetised)
287 Non-RSI (non-anaesthetised)
Tympanic thermometers calibrated every 3 months
SPSS V16.0; normality assumed; Student’s t test or one-way ANOVA for numerical data (Kruskal-Wallis test used for non-parametric data); χ² test for proportions; α=0.05
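The analysis described above can be reproduced with any standard statistics package. The sketch below is a hypothetical illustration in Python with SciPy rather than SPSS; the temperatures and counts are invented purely to show which test applies to which kind of data.

```python
# A hypothetical illustration of the tests named above; the study used SPSS,
# and every number below is made up for demonstration.
from scipy import stats

rsi_temps = [35.1, 35.8, 36.2, 34.9, 35.5]      # made-up RSI group temperatures (deg C)
non_rsi_temps = [36.4, 36.1, 36.7, 35.9, 36.3]  # made-up non-RSI group temperatures (deg C)

# Student's t test for normally distributed numerical data
# (stats.f_oneway gives a one-way ANOVA when there are more than two groups)
t_stat, p_t = stats.ttest_ind(rsi_temps, non_rsi_temps)

# Kruskal-Wallis test for data that are not normally distributed
h_stat, p_kw = stats.kruskal(rsi_temps, non_rsi_temps)

# Chi-squared test for proportions, e.g. ISS > 15 vs ISS <= 15 by group (made-up counts)
chi2, p_chi2, dof, expected = stats.chi2_contingency([[30, 20], [25, 45]])

alpha = 0.05
for name, p in [("t test", p_t), ("Kruskal-Wallis", p_kw), ("chi-squared", p_chi2)]:
    print(f"{name}: p = {p:.3f} ({'significant' if p < alpha else 'not significant'} at alpha = {alpha})")
```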
“For patients with a temperature recorded (n=494), a higher percentage of patients with CNS injury (AIS for head ≥3) was found for the RSI group than for the non-RSI group.” “The percentage of patients with ISS>15 was higher for the RSI group than for the non-RSI group (n=1292?).”
Patients in the RSI group were more likely to have had a serious head injury, and were more likely to have sustained a serious or critical injury, when compared with their non-RSI counterparts.
Figure: box plot of tympanic body temperature for all cases in which body temperature was recorded (n=474), showing the median (Q2), the quartiles Q1 and Q3, the minimum and maximum values lying within 1.5×IQR of Q1 and Q3, outliers between 1.5× and 3×IQR from Q1, and extreme values more than 3×IQR from Q1.
The measurements we make while doing research are nearly always subject to random variation. Determining whether findings are due to chance is a key feature of statistical analysis. The best way to avoid error due to random variation is to ensure your sample size is adequate.
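As a concrete illustration of judging whether a sample size is adequate, the sketch below estimates how many patients per group would be needed to detect a moderate difference between two means; the effect size, significance level and power are assumptions chosen for demonstration, not figures taken from this study.

```python
# A minimal sketch of a sample size calculation for a two-sample comparison;
# effect size, alpha and power are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # Cohen's d: a moderate effect
                                   alpha=0.05,       # significance level
                                   power=0.8)        # 80% chance of detecting the effect
print(f"Roughly {n_per_group:.0f} patients per group would be needed")
```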
Could the findings of this study have been affected by chance?
Whereas chance is caused by random variation, bias is caused by systematic variation. A systematic error in the way we select our patients, measure our outcomes, or analyse our data will lead to results that are inaccurate. There are numerous types of bias that may affect a study.
The selection of subjects into your sample, or their allocation to a treatment group, produces a sample that is not representative of the population, or treatment groups that are systematically different. Random selection and random allocation are the keys to avoiding this bias.
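As an illustration, the simplest form of random allocation is to shuffle the eligible subjects and split them into groups, so that allocation cannot depend on any characteristic of the patient; the identifiers below are invented.

```python
# A minimal sketch of simple random allocation; patient IDs are hypothetical.
import random

patients = [f"patient_{i:02d}" for i in range(1, 21)]
random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(patients)   # random order: allocation no longer depends on who the patient is

half = len(patients) // 2
group_a, group_b = patients[:half], patients[half:]
print("Group A:", group_a)
print("Group B:", group_b)
```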
Could the findings of this study have been affected by selection bias?
Measurement of outcomes is inaccurate. This may be due to inaccuracy in the measurement instrument or bias in the expectations of study participants, carers or researchers. The latter may be addressed by blinding participants, carers or researchers.
Could the findings of this study have been affected by measurement bias?
A classic example of confounding is to interpret the finding that people who carry matches are more likely to develop lung cancer as evidence of an association between carrying matches and lung cancer. Smoking is a confounder in this relationship - smokers are more likely to carry matches and they are also more likely to develop lung cancer.
Dealing with confounding is relatively easy if you know what the likely confounders are. You could stratify your results - i.e. in our previous example you could analyse smokers and non-smokers separately, or you could use statistical techniques to adjust for confounding.
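To make the adjustment approach concrete, the sketch below simulates the matches/smoking/lung-cancer example and compares a crude logistic regression with one adjusted for smoking; the data and variable names are invented purely to illustrate the technique.

```python
# A minimal sketch of adjusting for a known confounder (smoking) by regression;
# the dataset is simulated and not real.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
smoker = rng.integers(0, 2, n)
# Smokers are more likely to carry matches and more likely to develop lung cancer
matches = rng.binomial(1, np.where(smoker == 1, 0.8, 0.2))
cancer = rng.binomial(1, np.where(smoker == 1, 0.15, 0.01))
df = pd.DataFrame({"matches": matches, "smoker": smoker, "cancer": cancer})

# Crude model: carrying matches looks strongly associated with lung cancer
crude = sm.Logit(df["cancer"], sm.add_constant(df[["matches"]])).fit(disp=0)
# Adjusted model: once smoking is included, the association largely disappears
adjusted = sm.Logit(df["cancer"], sm.add_constant(df[["matches", "smoker"]])).fit(disp=0)

print("Crude odds ratio for matches:   ", round(float(np.exp(crude.params["matches"])), 2))
print("Adjusted odds ratio for matches:", round(float(np.exp(adjusted.params["matches"])), 2))
```

Stratifying (analysing smokers and non-smokers separately) gives the same insight without a model, at the cost of smaller groups in each stratum.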
Could the findings of this study have been affected by known confounders?
Dealing with unknown confounders is obviously much trickier. There is always a risk that an apparent association between a risk factor, or an intervention, and an outcome is being mediated by an unknown confounder. This is particularly true of observational studies, where patients may be selected into one treatment group or another not according to any explicit criteria but by some unknown process, such as a care provider's 'gut feeling'. The best defence against unknown confounders is randomisation. This ensures that both known and unknown confounders are randomly distributed between treatment groups.
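A quick way to see why randomisation helps is to simulate a confounder that the investigators never measure and check that random allocation still balances it between the groups; everything below is simulated for illustration.

```python
# A minimal sketch: random allocation balances even an unmeasured confounder.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
hidden_confounder = rng.normal(loc=0.0, scale=1.0, size=n)  # never measured in the study

treated = rng.permutation(n) < n // 2   # random allocation to two equal-sized groups
print(f"Mean hidden confounder, treated group: {hidden_confounder[treated].mean():.3f}")
print(f"Mean hidden confounder, control group: {hidden_confounder[~treated].mean():.3f}")
# The two means differ only by chance, even though the confounder was never observed.
```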
Could the findings of this study have been affected by unknown confounders?