Topics Related to Prescription Drugs, Anxiety
and Depression
Drug Study Problems II

First, let’s clear up the difference between two types of drug studies: epidemiological studies (in particular, retrospective cohort studies) and clinical studies. In a retrospective study, the researchers look back on what has happened in a population and try to infer why it happened. An example would be looking back on the histories of smokers and non-smokers and finding that the smokers had a higher incidence of heart disease. Here comes the tricky part: did the smoking cause the heart disease, or is there perhaps something else about people who smoke that also puts them at a higher risk of heart disease? Maybe people who smoke take, on average, poorer care of their health in other ways: less exercise, lots of fatty food, and infrequent checks of blood pressure, triglycerides, cholesterol, C-reactive protein, vitamin D, and so on. So if we really knew what was going on, maybe the smoking had no direct effect on the heart; it was just that the type of person who smokes is more likely to have heart problems. I hope you are shaking your head in disagreement, but the argument still holds: a study of this type (epidemiological) does not prove cause, it simply infers cause.

To summarize, there are just too many variables to account for in a retrospective epidemiological study to make it conclusive. The next step in trying to ascertain causality is the clinical study, in which two well-matched populations are compared (in this drug study discussion, one population or “group” would be given the active drug and the other group would be given a placebo). So here is the "smoking-causes-heart-disease" clinical study that would be very convincing: take 20,000 eighteen-year-old males and females who have never smoked and divide them into two 10,000-member groups well matched for general health. One group is instructed to start smoking exactly one pack of a certain brand of cigarettes a day and keep smoking for the rest of their lives, and the other group is instructed never to smoke for the rest of their lives.

We now monitor both groups for sixty-plus years and see what happens in terms of heart health. Simple, right? But how many researchers want to wait sixty-plus years for the results of a study? We also didn’t put in the complicating factors, such as the fact that some of the non-smokers will decide to smoke anyway, and some of the smokers will quit, despite their initial acceptance of the plan. We will still have to monitor all the other variables associated with heart disease for each individual during the study. I proposed this example because I think it makes the difficulties easier to see than a drug clinical study would, although the principles are the same. Let’s summarize where we are: retrospective studies don’t pinpoint cause; the best ones just infer a cause that needs to be confirmed by a carefully done clinical study. Clinical studies can do a better job of inferring cause, but they can be very cumbersome. Example: epidemiological studies in the past suggested that an increased intake of selenium would decrease the risk of prostate cancer in males. Then a clinical study was done, and selenium was shown to have no protective effect.
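To make the two-group comparison concrete, here is a toy simulation of the hypothetical smoking trial described above. This is only a sketch: the incidence rates (15% for smokers, 10% for never-smokers) are made-up numbers for illustration, not real epidemiology.

```python
import random

def simulate_trial(n_per_group=10000, control_rate=0.10, smoker_rate=0.15, seed=0):
    """Toy version of the hypothetical trial: two 10,000-member groups,
    each person independently develops heart disease with the group's
    (assumed, made-up) probability. Returns the observed incidence rates."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    smokers = sum(rng.random() < smoker_rate for _ in range(n_per_group))
    controls = sum(rng.random() < control_rate for _ in range(n_per_group))
    return smokers / n_per_group, controls / n_per_group

smoker_incidence, control_incidence = simulate_trial()
print(f"smokers: {smoker_incidence:.3f}, never-smokers: {control_incidence:.3f}")
```

With groups this large, the observed difference in incidence reliably reflects the built-in difference in risk; the real-world difficulties (dropouts, crossovers, sixty years of follow-up) are exactly the parts a simulation cannot capture.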

In a meta-analysis (which is by its nature retrospective), the results of several selected studies are combined, with the reasoning that the results of several independent studies are more likely to be reliable than the results of a single study. A meta-analysis requires only analysis of existing data: no new studies are performed. Such an analysis requires a selection of which studies are to be included; that is, the authors of the meta-analysis must select the studies that they feel were performed most rigorously from a scientific point of view. This requires the authors to define the "most scientific" studies and therefore adds another layer of assumptions to a retrospective study. Often, the studies finally selected for analysis are a small percentage of those initially under consideration. For a particularly clear explanation of the problems this kind of analysis can create, the reader is referred to [1].
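The arithmetic of combining studies can be sketched with one common approach, fixed-effect inverse-variance pooling: each study's result is weighted by its precision (smaller variance, bigger weight). The three effect sizes and variances below are hypothetical numbers, invented for illustration.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: a precision-weighted
    average of the study results. More precise studies (smaller
    variance) pull the pooled estimate harder."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# three hypothetical studies of the same drug (effect sizes and variances are made up)
effect, variance = pooled_effect([0.20, 0.35, 0.10], [0.04, 0.09, 0.01])
print(f"pooled effect: {effect:.3f}, pooled variance: {variance:.4f}")
```

Note what the formula cannot fix: it faithfully pools whatever studies the authors chose to include, so the selection step described above still drives the answer.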

So we can add the type of drug study to our list of “things to consider” when we read the results of a drug study. From a previous page we learned to ask “was this an animal study or a human study?” and, if the study was a clinical one, “what was the percent difference in response between the placebo group and the active drug group?” All studies for FDA approval are clinical studies, but the difference between epidemiological and clinical studies may help explain the results for other, non-regulated products, like St. John’s wort. It’s not hard to find “St. John’s Wort Shown to be as Effective as Prescription Antidepressants” and “St. John’s Wort Shown to be no Better Than Placebo” articles in the same science digest. Moral of the story: combining epidemiological and clinical drug study conclusions often results, at best, in confusion.
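The placebo-versus-drug question above is simple arithmetic, and it is worth doing explicitly when reading a study report. A minimal sketch, with hypothetical trial numbers (not taken from any real antidepressant study):

```python
def response_difference(active_responders, active_n, placebo_responders, placebo_n):
    """Absolute difference in response rates between the active-drug
    arm and the placebo arm (a positive number favors the drug)."""
    return active_responders / active_n - placebo_responders / placebo_n

# hypothetical example: 120 of 200 respond on the drug, 95 of 200 on placebo
diff = response_difference(120, 200, 95, 200)
print(f"drug minus placebo response rate: {diff:.3f}")  # 0.600 - 0.475 = 0.125
```

A drug can be "statistically significantly better than placebo" while this difference is small; the size of the gap, not just its existence, is what the question is probing.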







