Why are experts usually so sceptical?

From Testiwiki
Revision as of 13:02, 9 July 2010 by Mikomiko (talk | contribs)

The author of the chapter on reproductive toxicology in a textbook wrote that it is seldom possible even to attempt risk assessment on the basis of human data. Why? Isn’t it clear that if a malformed child is born to a mother who was exposed to dioxins, the developmental effect is attributable to the dioxins?


There are some very clear cases. In 1958, a new sedative drug, thalidomide, was marketed in Germany. It was claimed to be safer than the barbiturates that had been used before. It was also used by pregnant women for the nausea often encountered during pregnancy. Soon information started to emerge on new kinds of malformations. The children’s clinic in Hamburg had not encountered a single case of phocomelia (stunted upper and/or lower limbs) in all the years from 1949 to 1958. That situation changed dramatically; there was one case in 1959, 30 in 1960 and 154 in 1961. After a feverish investigation, it was found that all the mothers had been using thalidomide. The curves of thalidomide sales and the incidence of phocomelia cases 8 to 9 months later were very similar. Only subsequently was the causal relationship confirmed in animal experiments.

In the case of thalidomide there were three factors that helped to prove the effect using information from human patients. First, this type of malformation is very rare; if hundreds of cases suddenly appear, it is most unlikely that many environmental factors have changed simultaneously. Second, the likelihood of malformation after thalidomide use is very high, because the drug was given at fairly large doses and the dose was rather similar for all patients. Third, there were so many cases, and there was a clear chronological association between individually confirmed use of the drug and the appearance of the malformation. Thus it was relatively easy to collect all this information and prove the association between the drug and the malformation.

The fourth line of proof required basic research with experimental animals. Under controlled conditions, it is possible to change a single environmental factor while keeping all other factors identical between animals. Several groups of rats and rabbits were treated in an identical manner except that some of the groups were given thalidomide. Malformations occurred only in the drug-treated groups, in none of the others. These experiments also provided mechanistic biological information that helped to explain why thalidomide does what it does. Since this tragedy, all new drugs have been required to undergo animal screening for their potential to damage the developing foetus.

Ecological fallacy

Another case, from Sweden, is based on a much more complicated and less straightforward dataset. Birth weights in the families of Baltic Sea fishermen were found to be 80 g lower than those in the families of North Sea fishermen. The working hypothesis was that members of the Baltic Sea families eat more contaminated Baltic Sea fish. Initially the contaminant concentrations in humans were not measured, so this crucial information was missing and exposure was inferred simply from location. We do not know what other factors differ between the families, although a few were taken into account on the basis of a questionnaire study. After this kind of non-experimental, uncontrolled study, some sort of confirmation is always needed. So far there has been no definitive proof to support the original hypothesis. Another important point about single studies is that so-called statistical significance usually means less than a 5% chance of a false result, i.e. even when there is no true difference, chance alone has a fairly high probability of producing a positive result[1].
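The 5% point above can be demonstrated with a short simulation. The following sketch (a hypothetical illustration, not from the original text) repeatedly compares two groups drawn from the same population, so by construction there is no true difference; any "significant" result is a false positive, and such results turn up in roughly 5% of the simulated studies.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

def one_simulated_study(n=50):
    """Compare two groups sampled from the SAME distribution.

    Returns True if the study would (wrongly) be declared
    'statistically significant' at the conventional 5% level.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic; with n = 50 per group, |t| > 1.96
    # approximates the two-sided p < 0.05 criterion.
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 1.96

trials = 2000
false_positives = sum(one_simulated_study() for _ in range(trials))
print(f"False positive rate: {false_positives / trials:.1%}")  # close to 5%
```

Run many single studies and about one in twenty will look "significant" by chance alone, which is why confirmation after a single non-experimental study is always needed.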

A typical problem in a non-controlled population study is the so-called ecological fallacy. There are more cases of testis cancer in Denmark than in Finland. Is this due to higher exposure in Denmark to pesticide residues or other environmental chemicals? This is a possibility, but even demonstrating that the exposure in Denmark is higher is not enough to prove the relationship; it only produces a working hypothesis. The next question is what other differences there are between the Danes and the Finns.

Another difference between the Danes and the Finns is the prevalence of smoking among women. An alternative hypothesis, then, is that smoking during pregnancy could initiate cancer cells in unborn boys, which would later materialise as testis cancer. This hypothesis fits the statistics very nicely, because smoking among women was already common in Denmark in the 1940s, whereas in Finland it only started in the 1960s and is still less common. The possibility was recently tested in a well-controlled study, but no association was found between testis cancer and maternal smoking (smoking was measured as cotinine concentrations in maternal serum during pregnancy, a reliable measure). Thus this particular competing hypothesis was quite likely false.

There are still many other possible differences between Danes and Finns that have to be screened before the first hypothesis can be given any weight. The important point is that one should never confuse a working hypothesis with a proven fact.

Controlled studies

Even a well-performed uncontrolled study takes us only halfway to the truth. A controlled intervention study may reveal very surprising findings. An intervention is somewhat like an animal study: the population is divided randomly into similar groups, and one group is exposed to the substance or other factor being studied while the control group is not. We may then believe that the only factor differing between the groups is the substance being studied. Moreover, a controlled study can often be performed double-blind. The studied substance is put into capsules or tablets, and an identical-looking fake, or placebo, is provided for the control group. All this is done by a third party, who codes the preparations with a code revealed to neither the researchers nor the study subjects, to avoid any psychological impact of knowing whether one is being treated. In this way, preconceived expectations cannot influence the result of the trial.
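The allocation and blinding steps described above can be sketched in a few lines. This is a hypothetical illustration (subject names and code format are invented for the example): subjects are randomly shuffled into equal-sized active and placebo groups, and a third party holds the key linking opaque codes to assignments, so neither researchers nor subjects can tell the preparations apart.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Illustrative subject identifiers (invented for this example).
subjects = [f"subject-{i:03d}" for i in range(1, 9)]

# Randomisation: shuffle, then split into two equal groups.
random.shuffle(subjects)
half = len(subjects) // 2
assignment = {s: "active" for s in subjects[:half]}
assignment.update({s: "placebo" for s in subjects[half:]})

# Blinding: a third party labels each subject's preparation with an
# opaque code; the key stays secret until the trial is unblinded.
blinding_key = {s: f"code-{random.randint(1000, 9999)}" for s in subjects}

# Researchers and subjects see only indistinguishable coded capsules.
blinded_view = {blinding_key[s]: "capsule" for s in subjects}

print(sum(1 for v in assignment.values() if v == "active"))  # prints 4
```

The key design choice is that randomisation equalises all other factors between the groups on average, while the coding removes expectation effects from both sides of the trial.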

A well-known intervention is the beta-carotene study conducted in Finland. Nine thousand smoking men were divided into similar groups: some were given beta-carotene, others a placebo. Against all expectations, and contrary to the results of several uncontrolled studies, beta-carotene did not prevent lung cancer at all. The supposition had been based on the antioxidant properties of beta-carotene.

Such findings make researchers very cautious. The number of pairs of brooding storks correlates nicely with the number of babies born in Germany during the same years, but this alone does not prove cause and effect. It is more difficult to perform controlled studies on environmental chemicals than on drugs or vitamins. This is one of the reasons why knowledge about these topics is often less reliable and based only on animal experiments.

Does it matter if we ban a chemical without any real reason? That depends on the chemical. If the chemical is still only an applicant for a marketing licence, rejection would probably not cause any major problems. But what if it is a natural chemical in the environment? Would it be reasonable to exclude the use of many groundwater reservoirs simply because they contain tiny amounts of arsenic, fluoride, uranium, nitrates or aluminium? If we apply a ban, what would be the alternative source of drinking water? If it is surface water, what about disinfection by-products? Likewise, the production of chlorinated persistent organic pollutants has been banned, but these compounds still linger in fish; should we also ban fish consumption? And if fish consumption is beneficial for health, what would be an equally good nutritional alternative?

Everybody would be happy if the world were simple. Unfortunately, it is not and never will be. Therefore researchers should remain sceptical about the possibility that simple and straightforward answers can be found to complicated questions.

Proving a cause-and-effect relationship is not simple, and wrong information is often worse than no information at all.

Notes and references

  1. A false positive result is a result suggesting that something is true when in fact it is not; reasons include pure chance or measurement error. A false negative result means that the study fails to detect a true effect, often because the method is too insensitive (e.g. too few individuals in the study to exclude the possibility of chance).

One level up: Here a risk, there a risk, everywhere risks, risks!

Previous chapter: What are the most common misconceptions about risks held by the man in the street?

Next chapter: Why do different people rate different risks so differently?