In 1991, Cook et al. reported two meta-analyses on the utility of various stress ulcer prophylaxis regimens in critically ill patients. That same year, Tryba reported another meta-analysis of similar drugs. The two groups agreed in some aspects of their findings but disagreed in others. To resolve the discrepancies, and to update their work, the authors pooled resources and performed a new meta-analysis, the results of which are detailed here.
The authors discuss the main reasons their initial meta-analyses reached different conclusions: different agents included (Tryba included pirenzepine and prostaglandins; Cook did not); some differences in the sets of studies included; different definitions of bleeding; and different analytic approaches in performing the meta-analyses.
The authors then give a brief summary of how their initial studies agreed and disagreed, and compare both to the current study.
When comparing the results of the prior meta-analyses, in some cases the results were the same; in some cases a statistically significant result in one study was a trend that did not reach significance in the other; in some cases a significant result in one study was a trend in the opposite direction in the other study. In no cases, however, were diametrically opposite statistically significant results reported. This is somewhat reassuring, since meta-analyses should, ideally, resolve conflicts in the literature, not create new ones.
Although meta-analyses are very well suited to "transforming" trends across a number of studies into a statistically significant result, I'm not sure that a trend in a meta-analysis itself is any more meaningful than a trend in an individual study. In this paper, trends are mentioned frequently, and there is even a rigorously defined distinction between trends and strong trends. I believe there is a risk that a trend in a meta-analysis will be accorded more weight than a similar trend in an individual study, which is not warranted. A trend is not statistically significant, whether in a trial or in a meta-analysis. I did not include any of the trends reported in this paper in my summary.
This area of literature review is quite confusing to me (your comment that a trend is not significant and should carry no weight, whether in a multi-study analysis or in an individual study).
For those of us who are not as steeped in statistical analysis, could you expand on this comment/concept? Intuitively, it would seem that if several studies showed trends, this should carry greater weight in trusting the data to be something other than chance. Is the point you are making that a tendency that does not reach significance is not to be trusted?
OU/Enid Family Medicine
You are absolutely right that if several studies show a trend in a certain direction, combining those studies through meta-analysis often makes it possible to demonstrate that the trend is, in fact, more than just a trend and actually reaches statistical significance. This is the main rationale behind meta-analysis.
My point is that if, after combining the study populations and pooling the results through meta-analysis, one is left with a trend that still does not reach statistical significance, then the outcome is still only a trend. Of course, several studies with a trend, when combined by meta-analysis, may well produce a "stronger" trend (but not necessarily so). It's still just a trend and no more significant for merely being the result of a meta-analysis.
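To make both halves of this point concrete, here is a small sketch using invented numbers (not data from the paper under review): two hypothetical trials are pooled by inverse-variance fixed-effect weighting of their risk differences. Each trial alone shows only a non-significant trend; the pooled estimate happens to cross the conventional z = 1.96 threshold. With weaker trends as input, the pooled z would stay below 1.96, and the result would remain just a trend.

```python
import math

def risk_diff(events_t, n_t, events_c, n_c):
    """Risk difference (treated minus control) and its standard error."""
    pt, pc = events_t / n_t, events_c / n_c
    rd = pt - pc
    se = math.sqrt(pt * (1 - pt) / n_t + pc * (1 - pc) / n_c)
    return rd, se

def pool_fixed_effect(trials):
    """Inverse-variance fixed-effect pooling of (estimate, SE) pairs."""
    weights = [1 / se ** 2 for _, se in trials]
    est = sum(w * d for w, (d, _) in zip(weights, trials)) / sum(weights)
    se = 1 / math.sqrt(sum(weights))
    return est, se

# Hypothetical trials of a prophylactic agent vs. control (invented numbers):
# each trial on its own shows only a non-significant trend (|z| < 1.96).
t1 = risk_diff(8, 100, 15, 100)
t2 = risk_diff(10, 120, 18, 120)
pooled = pool_fixed_effect([t1, t2])

for label, (rd, se) in [("trial 1", t1), ("trial 2", t2), ("pooled ", pooled)]:
    z = rd / se
    verdict = "significant" if abs(z) > 1.96 else "trend only"
    print(f"{label}: RD = {rd:+.3f}, z = {z:+.2f} -> {verdict}")
```

The same machinery, fed two trials with slightly smaller effects, would print "trend only" for the pooled line as well; nothing about the pooling step itself confers significance.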
If you believe trends in studies should be reported, particularly if they nearly reach statistical significance (that very arbitrary concept), then it's OK to cite them in a meta-analysis. I'm just saying that a trend in a meta-analysis isn't magically more valid than an equally strong trend in an individual study.
Sorry if that wasn't clearly put in my original comment. -- mj
Date: Fri, 22 Mar 1996
Subject: meta-analysis of Stress Ulcer Prophylaxis
I think it is critical to point out that meta-analyses are nothing more than statistical "toys". For example, consider the recent meta-analysis by Furberg et al. in Circulation. The same issue contains a rebuttal meta-analysis of the same studies, but with a non-significant result, accomplished by a minor adjustment of the data used for the meta-analysis.
Clinicians should pay more heed to well-performed studies that have significant results. If two such studies conflict, one should pick the study that applies most closely to one's own practice and experience. Once the studies to be used have been chosen, a meta-analysis gives no extra weight to the better studies beyond the number of patients studied, so a lot of mediocre studies receive equal weight with landmark studies.
The study by Furberg et al (Circulation, September 1, 1995) was a meta-analysis looking at nifedipine in patients with coronary disease (mainly post-MI patients), and found a dose-related increase in mortality. In the same issue, Opie and Messerli dispute these results. They state that the analysis of one of the studies should have looked at 6-month rather than 2-week mortality and that one of the studies should not have been included. Under these circumstances, the results would not have been significant.
As you also point out, there is, as a rule, no weighting of results by study quality, although the meta-analysis reviewed here did attempt to at least quantify the "quality" of the studies.
There are two more objections one could raise to the whole concept of meta-analysis. One of them, mentioned by the authors of this study, relates to publication bias. There is a very strong bias towards the publication of "significant" results and against publication of negative results. This bias is magnified by the process of meta-analysis.
Another question relates to the clinical significance of results. If it takes a meta-analysis, a pooling of large studies, to show statistical significance, one wonders how important those results could be in the "real-world".
Despite the above, meta-analysis is not necessarily a useless exercise. For one thing, using meta-analysis it is possible to get a better, more accurate assessment of the quantitative magnitude of an effect. If five trials show a significant reduction in mortality from a therapy, the magnitude of that reduction can be estimated with greater accuracy by pooling the results (assuming a homogeneous population, which is supposed to be a prerequisite for performing a meta-analysis anyway).
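To sketch this point with invented numbers (illustrative only, not the paper's data): under inverse-variance pooling, the pooled standard error is 1/sqrt(sum of weights), which is necessarily smaller than any single trial's standard error, so the confidence interval around the magnitude of the effect narrows.

```python
import math

# Hypothetical (risk reduction, standard error) pairs from five trials --
# illustrative values, not taken from the meta-analysis under review.
trials = [(-0.05, 0.030), (-0.08, 0.040), (-0.06, 0.025),
          (-0.07, 0.035), (-0.04, 0.028)]

weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * e for w, (e, _) in zip(weights, trials)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))

half = 1.96 * pooled_se  # half-width of the 95% confidence interval
print(f"pooled effect {pooled:+.3f}, "
      f"95% CI ({pooled - half:+.3f}, {pooled + half:+.3f})")
print(f"pooled SE {pooled_se:.4f} vs narrowest single-trial SE "
      f"{min(se for _, se in trials):.4f}")
```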
Another benefit of meta-analysis is in uncovering an effect that usually takes years to develop. For example, if a therapy is carcinogenic, it might take years for that effect to be readily detectable; over a shorter period of time, carcinogenicity will probably be only minimally apparent. A meta-analysis, by greatly increasing the power to detect small changes, can help uncover such an effect earlier on.
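A rough normal-approximation power calculation, again with hypothetical incidence figures, shows how pooling raises the power to detect such a small harm:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sided(p_control, p_treated, n_per_arm, alpha_z=1.96):
    """Approximate power of a two-sided z-test for a risk difference
    (ignores the negligible opposite-tail contribution)."""
    rd = p_treated - p_control
    se = math.sqrt(p_treated * (1 - p_treated) / n_per_arm
                   + p_control * (1 - p_control) / n_per_arm)
    return norm_cdf(abs(rd) / se - alpha_z)

# Hypothetical rare harm: 1% incidence on control vs 2% on therapy.
print(f"one trial, n=200/arm:  power = {power_two_sided(0.01, 0.02, 200):.2f}")
print(f"pooled,  n=2000/arm:   power = {power_two_sided(0.01, 0.02, 2000):.2f}")
```

A single 200-patient-per-arm trial would usually miss this doubling of risk, while the pooled population stands a good chance of detecting it.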
Finally, meta-analyses often function as excellent reviews of the studies that have been published on a given topic. I find the diagrams that show risk reduction (or other parameter) and confidence intervals for multiple studies to be particularly helpful.
I agree, however, that meta-analyses can sometimes fall into the category of "lies, damn lies and statistics" and cannot be automatically taken for gospel truth. They need to be reviewed just as critically as any study or trial. -- mj
Subject: Metaanalysis and stress ulceration
Date: Mon, 22 Apr 1996
From: Peter Ellis <email@example.com>
I would like to make an additional comment regarding metaanalysis. Metaanalysis, like any randomised controlled trial, can be subject to bias if it is not well performed. It is therefore just as important to critically appraise an article on metaanalysis as any other published research.
Several authors have published guidelines for the appraisal of overviews and metaanalyses. Two that I find useful are 'Oxman AD. Checklists for review articles. BMJ 1994;309:648-651' and 'Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. JAMA 1994;272(17):1367-1371'.
The results of a metaanalysis should not routinely be accepted, just as the results of an RCT should not be; rather, they should be critically appraised. Other articles in the JAMA series 'Users' guides to the medical literature' may also prove useful for this purpose. Alternatively, this series can be located at
http://hiru.hirunet.mcmaster.ca/ebm/userguid/default.htm [note: this site is no longer active; try: http://www.cche.net/usersguides/main.asp -- mj]
Peter Ellis MBBS FRACP
Dept Cancer Medicine, Sydney University, Australia