
Statistics can be deceiving

by: Christina Bergström

Be forewarned: Everyone has an axe to grind, a point to prove, or a product to sell.

No matter where we turn today, we find statistics all around us.

In this classic, yet still up-to-date Nobelpharma News report, one of Ulf Lekholm’s co-authors, and a statistical expert in her own right, explains how figures are used to tell a story—or sometimes to exaggerate one. Depending on how they are presented, statistics can be reliable or misleading.

Statistics make it possible to organize material for systematic analysis and can be used to present findings as clear, objective figures instead of vague, subjective words (the most common being “a few” and “usually”). There is quite a difference between saying “He usually does well at competitions” and “Three times out of 10, he has done well at competitions.” “Usually” means different things to different people. Statistics also serve well to define the relationship between two variables.

Presentations of results and conclusions are often based on statistical analyses. Their reliability depends not only on the quality of the data and the manner of presentation, but also on their interpretation by the reader. Interpretation varies with the reader’s background and with how the material is presented.

It is crucial for the reader to be critical. Many questions need to be asked. A few examples: Is it reasonable to draw the conclusions being presented from the figures available? Is it reasonable to talk about 5-year results when only 9 of 3000 subjects in a study have attended the 5-year checkup? What is the objective of the report? Who may profit from the results?

Diagrams can be used when presenting material. They can provide the reader, i.e. the interpreter, with both numerical and visual information, but it is important to understand that such visual information can easily be manipulated by simple modifications of the diagram.

Cut-offs exaggerate

A diagram axis can be “cut off” to produce more dramatic differences.

Proportions are lost with a cut-off; that is, the correct proportions of the data cannot be seen. Instead of giving each single unit equal space, only a specific part of the scale has been presented in figure 1. If a cut-off of the axis is necessary to make a point, it should be clearly marked to aid interpretation.

Optical illusions

Using broader bars in a bar graph makes the differences between the bars appear less dramatic.

Another way of changing the visual impression of a bar chart is to change the axes so the frequency is represented along the horizontal axis (x-axis) instead of the vertical axis (y-axis) as can be seen in figure 3. The vertical axis should, whenever possible, be the frequency axis because a horizontal frequency axis makes the differences in the chart appear to be smaller.

Tables instead of graphs?

Data can also be presented in tables. Good tables must be easy to read and easy to interpret; otherwise the reader will quickly lose interest. Some tables are so detailed that it is impossible to determine which parts of the table are important. Labeling the columns with letters or abbreviations can also make a table more difficult to interpret.

When comparing different groups within a population or when comparing different populations, the manner in which the groups/populations have been selected should be described as well as their respective size(s).

For example, the number of failed fixtures is of interest only if you know the total number of fixtures in the specific groups/population. When comparing different groups, it is necessary to mention both the total number and the number of the compared sub-groups.


A lifetable includes both actual and relative numbers (results) and is an appropriate way to present the results of a long-term follow-up study (see figure 4); the long-term final results can even be forecast with a lifetable before the study is completed. Nonetheless, it is important to be aware that more than 75 percent of the initial group must remain in the study (including any failed fixtures) to draw any reliable conclusions. The reader will find it difficult to interpret the results if only one part of the lifetable is presented, especially if only the cumulative success rates (CSR) are shown.
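The cumulative success rate in a lifetable is built up interval by interval. A minimal sketch of that calculation, using hypothetical numbers (not taken from the article), where each interval records how many implants were still under control and how many failed:

```python
# Hypothetical lifetable intervals: (implants at risk at the start of the
# interval, failures during the interval). Dropouts explain why the
# denominator shrinks from one interval to the next.
intervals = [(1000, 20), (950, 10), (900, 9)]

csr = 1.0
for at_risk, failed in intervals:
    interval_survival = (at_risk - failed) / at_risk
    csr *= interval_survival  # CSR is the running product of interval survival rates
    print(f"at risk {at_risk:4d}, failed {failed:2d}, "
          f"interval survival {interval_survival:.3f}, CSR {csr:.3f}")
```

This is why showing only the CSR column is not enough: the same CSR can arise from very different numbers at risk, and a reader needs the denominators to judge how reliable the figure is.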

Lasting impressions

To give the reader an impression of dramatically decreasing numbers of failed fixtures, the author can choose to show only the numbers of failed fixtures during successive time periods (see figure 5A). This hides the fact that the number of controlled implants has decreased successively over the same periods (see figure 5B) and that the success rate hardly varies at all from one time period to the next (see figure 5C).
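The effect described above can be shown with a few lines of arithmetic. The numbers below are hypothetical, chosen to mirror figures 5A–5C: the failure counts fall sharply, but only because the denominator falls with them.

```python
# Hypothetical follow-up data: failure counts drop (as in figure 5A)
# mainly because the number of implants still under control drops (5B),
# while the per-period failure rate stays constant (5C).
failed_per_period = [20, 10, 5]           # looks like a dramatic improvement...
controlled_per_period = [1000, 500, 250]  # ...but the denominator halves too

rates = [f / c for f, c in zip(failed_per_period, controlled_per_period)]
for failed, controlled, rate in zip(failed_per_period, controlled_per_period, rates):
    print(f"failed {failed:3d} of {controlled:4d} controlled -> period rate {rate:.1%}")
```

Every period has the same 2 percent failure rate; only the bare failure counts suggest an improvement.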

In order to give the reader an overall picture when presenting the results of a comparison, it is important to state carefully what actually has been compared. The size of the comparative groups should be stated, not the results of the comparison alone. The objective and the method of the investigation ought to be declared, and it is also important to spell out the plan of the investigation for the reader.

The loss of subjects due to dropout, infirmity or death is a difficulty faced in most clinical trials, and its effects can be far-reaching. Suppose the objective of a study is to determine the proportion of 1000 implanted fixtures that survive 5 years after the operation. If data are collected on only 500 of the fixtures and 50 of those 500 have failed, the failure rate may appear to be 10 percent. But if the other 500 have also failed, 55 percent is the true figure; on the other hand, if the 500 uncontrolled fixtures all survived, only 5 percent have actually failed.
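The article's own numbers make the point concisely. The sketch below simply restates that arithmetic: the observed rate, and the best- and worst-case bounds once the 500 uncontrolled fixtures are accounted for:

```python
# The article's example: 1000 fixtures implanted, only 500 followed up,
# and 50 of those 500 are known to have failed.
total, followed, known_failures = 1000, 500, 50
missing = total - followed  # 500 fixtures with unknown status

observed_rate = known_failures / followed        # naive reading: 10%
worst_case = (known_failures + missing) / total  # every missing fixture failed: 55%
best_case = known_failures / total               # every missing fixture survived: 5%

print(f"observed {observed_rate:.0%}, "
      f"true rate somewhere between {best_case:.0%} and {worst_case:.0%}")
```

The honest statement is the interval, not the single observed figure; the wider the gap between the two bounds, the less the follow-up data can tell us.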

In some cases, figures must be rounded off. If a result is presented with figures that suggest great precision (i.e., many decimal places), it looks much more exact than the size of the sample may justify.
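A small hypothetical example (the sample size is invented for illustration) shows why extra decimals can mislead: with 20 subjects, the result can only move in steps of 5 percentage points, so four decimal places promise far more precision than the sample can deliver.

```python
# Hypothetical: 7 successes out of 20 subjects.
successes, n = 7, 20
rate = successes / n
print(f"{rate:.4%}")  # "35.0000%" looks exact...

# ...but a single subject changing outcome shifts the result by 1/n.
resolution = 1 / n
print(f"smallest possible change: {resolution:.0%}")
```

Reporting such a result as "35%" (or even "about a third") is more honest than "35.0000%".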

An article can look trustworthy if numerous statistical analyses are used to demonstrate the results. This gives the reader the impression that the study encompasses well-controlled material and does not need to be brought into question.

Nonetheless, statistical analyses can be deceptive. It is quite possible that the reader is not familiar with the methodology used and that the analyses therefore do not provide any comprehensible information. If that is the case, then the statistical analyses used are of little value. Once again, it is the duty of the author to make sure that the article can be understood by the vast majority of those for whom it has been written.


Many statistical analyses are carried out to examine the relation between two variables (within a single group of subjects) in order to assess whether or not the two variables are associated. One may suspect a relationship between two variables if one variable varies simultaneously with the other.

This is called a correlation; it implies no cause/effect relationship. There may indeed be a cause/effect relationship, but a third variable could equally well be the cause of both observed changes.
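A minimal sketch of the third-variable trap, with wholly hypothetical data: two outcome variables that are each driven by an unmeasured confounder (say, patient age) correlate perfectly with each other even though neither causes the other.

```python
# Pearson correlation coefficient, computed from first principles.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

confounder = [1, 2, 3, 4, 5, 6]          # hypothetical: patient age
var_a = [2 * c + 1 for c in confounder]  # both outcomes rise with age...
var_b = [3 * c - 2 for c in confounder]  # ...so they track each other exactly

print(round(pearson(var_a, var_b), 3))   # perfect correlation, zero causation
```

The coefficient comes out at 1.0, yet changing var_a would do nothing to var_b: the correlation exists only because both follow the confounder.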

When reading articles that include statistical analyses in medical magazines or journals, in newspapers or the popular press—just to name a few examples—the reader has to be critical and look for the figures that underlie the values under study. If those figures cannot be found, it is possible that the writer has consciously chosen to present his or her results in a vague manner. Be skeptical when you read and you’ll get more out of the materials you find interesting.

More to explore:

Looking to add to your skill set? Check out Nobel Biocare's global course catalog to find a training program near you.