One way to distinguish random errors from systematic errors is to imagine what would happen if a study were increased in size. In that situation the random errors would be reduced, but the systematic errors would remain. Suppose you would like to know the average height of the 200 haemodialysis patients in your centre. Several sources of error will affect your estimate. The measuring tape may give different readings depending on the way it was held, how it was read, the time of day and who took the measurement. If the variation in the way the tape was held was random, the resulting errors will also be random, i.e. sometimes produce a reading that is too high and sometimes one that is too low. On average, such readings will tend to be neither much too high nor much too low, and the effect of these errors will shrink as you include more patients in your study. The difference between the average measured height and the average true height of your patients will then be close to zero. Random errors are therefore reduced as study size increases.
The situation is quite different in the case of systematic error. Suppose your measuring tape has been stretched because it has been used for many years and nobody has bought a new one. The height of each patient will then be underestimated systematically, and this systematic error cannot be reduced by increasing the number of measurements: the average measured height will be biased. Epidemiologic studies are almost always subject to bias to some extent. In future newsletters we will discuss a number of the many types of bias, e.g. selection bias, information bias and confounding bias.
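The contrast between the two kinds of error can be made concrete with a small simulation. The sketch below is purely illustrative: the true mean height, the standard deviations and the 2 cm stretch of the tape are hypothetical numbers chosen for the example, not data from any real dialysis centre.

```python
import random

def simulate_mean_height(n, true_mean=170.0, sd=8.0,
                         random_error_sd=1.0, systematic_bias=0.0, seed=0):
    """Simulate measuring the heights (in cm) of n patients.

    Each measurement = true height + random error + systematic bias.
    All parameter values are hypothetical, for illustration only.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        true_height = rng.gauss(true_mean, sd)          # patient's real height
        measured = (true_height
                    + rng.gauss(0.0, random_error_sd)   # random reading error
                    + systematic_bias)                  # e.g. a stretched tape
        total += measured
    return total / n

# Random error only: the sample average moves toward the true mean as n grows.
small_study = simulate_mean_height(20)
large_study = simulate_mean_height(20000)

# A stretched tape that removes ~2 cm from every reading: no matter how many
# patients are measured, the average stays about 2 cm too low.
biased_study = simulate_mean_height(20000, systematic_bias=-2.0)
```

Running this, `large_study` lands close to the assumed true mean of 170 cm, while `biased_study` stays near 168 cm however large the study is made, mirroring the point of the paragraphs above.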

For further reading
1. Rothman K. Epidemiology: an introduction. Oxford University Press, 2002.
2. Coggon D, Rose G, Barker DJP. Epidemiology for the uninitiated. BMJ Publishing Group, 2003.

Kitty Jager
Managing Director of the ERA-EDTA Registry