We all have a 100% absolute risk of dying from something. We also have a much smaller absolute risk of developing cancer, having a heart attack, etc.  Although small, these absolute risks accumulate as we age and often accelerate in old age, which explains why heart attacks are rare in children and far more common in older adults, for example.  Knowing your absolute risk is important.  The trick is finding absolute risk information in the literature.

Most studies don’t bother mentioning the absolute risk, or they (rightly) point out that absolute risk changes with time and circumstances.  That said, there really should be a database that gives the absolute risk of any member of a population dropping dead from whatever your morbid curiosity has led you to look up.  Maybe it’s out there and I just haven’t found it.

In the meantime you will encounter many studies that talk about RELATIVE RISK, so let’s look at an example of how the same risk data can be stated as both absolute and relative.  NB – relative risk (RR, also called the risk ratio) goes by several names in the literature: hazard ratio (HR) in time-to-event analyses, and relative risk reduction (RRR) when a study reports a decrease.  Some of these involve different statistical models, but for our purposes they can be treated as the same thing.

Using numbers from a 2006 New York Times article on lung cancer we see that there are 180,000 new cases in the United States each year. Of those, 20%, or 36,000, are non-smokers. Because we can’t point to any risky behaviors in the non-smoker group, we shall assume that these 36,000 individuals are simply unlucky: they had the same chance of experiencing an unfortunate combination of genetics and environmental factors as any other non-smoker.  The proportion of the population represented by these 36,000 is a pretty good number for baseline absolute risk.  In hard numbers, 36,000 divided by the population of the United States in 2006 (298.4 million) gives us an absolute risk IN ANY GIVEN YEAR of 0.000121, or about 0.012%.
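
For those who like to check the math, here is the same calculation as a few lines of Python – just a sketch using the article’s round numbers, nothing more:

```python
# Numbers from the 2006 New York Times article.
new_cases = 180_000            # new U.S. lung cancer cases per year
nonsmoker_share = 0.20         # fraction of cases occurring in non-smokers
us_population = 298_400_000    # U.S. population in 2006

nonsmoker_cases = new_cases * nonsmoker_share      # 36,000
absolute_risk = nonsmoker_cases / us_population    # per person, per year

print(f"{absolute_risk:.6f}")   # 0.000121
print(f"{absolute_risk:.4%}")   # 0.0121%
```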

Let’s stop for a moment to consider the fact that the data do not tell us the average age of new lung cancer patients.  If the average age is 60, then we could say with some confidence that the absolute risk of developing lung cancer at age 60 is 0.000121 (about 0.012%).  We could also say with pretty good confidence that the risk is lower if you are younger and higher if you are older than 60.  With me so far?

Now let’s look at the smokers in the data from 2006.  The remaining 80% of new cases, 144,000, occur in smokers, so if you are a smoker your ABSOLUTE RISK is 144,000 divided by the same total population, or 0.000483 (about 0.048%), in any given year.  Also tiny, but more worrisome if we compare this absolute risk number to the baseline absolute risk number from the non-smoker group.  That comparison is the RELATIVE RISK: 0.000483 / 0.000121 = 4.0, which means smoking is associated with four times the non-smoker risk of developing lung cancer in any given year, a 300% increase.
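
A couple more lines make the comparison explicit (again, a sketch built on the article’s numbers):

```python
# Smoker cases divided by the same total population, as above.
smoker_cases = 180_000 * 0.80               # 144,000 cases in smokers
smoker_risk = smoker_cases / 298_400_000    # ~0.000483 per year
nonsmoker_risk = 36_000 / 298_400_000       # ~0.000121 per year (baseline)

relative_risk = smoker_risk / nonsmoker_risk    # 4.0
percent_increase = (relative_risk - 1) * 100    # 300% increase over baseline
```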

One way to make the data digestible is to describe it in terms of incidence rate.  In the example above the incidence rate would be roughly 60 new lung cancer cases for every 100,000 people in the U.S., of which only about 12 per 100,000 are non-smoker cases.
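
If you prefer the incidence-rate form, it is the same arithmetic scaled to 100,000 people:

```python
# Annual incidence per 100,000 people (same 2006 figures).
overall_rate = 180_000 / 298_400_000 * 100_000     # ~60 cases per 100,000
nonsmoker_rate = 36_000 / 298_400_000 * 100_000    # ~12 cases per 100,000
```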

In an attempt to make all of this simpler, the baseline (absolute) risk is stated as 1.0 to keep from having to compare six-digit decimals.  If a study shows that intervention A results in a relative risk of 1.2, we can say that intervention A increases risk by 20%.  Intervention B results in a relative risk of 0.8, or a 20% reduction in relative risk.  The trick with the higher numbers is to always remember to subtract the baseline score of 1 – for example, the smoker’s relative risk above would be stated as 4.0, which is a 300% increase in risk (not/not 400%).
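
Stated as code, the conversion is just “subtract the baseline of 1, then multiply by 100”:

```python
def percent_change(relative_risk: float) -> float:
    """Percent increase (+) or decrease (-) relative to the 1.0 baseline."""
    return (relative_risk - 1.0) * 100

percent_change(1.2)   # +20%  (intervention A)
percent_change(0.8)   # -20%  (intervention B)
percent_change(4.0)   # +300% (the smoker example above)
```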

By the way, when associations get very large, say a relative risk of 2.0 (a 100% increase) or greater, most scientists begin to treat them as evidence of causation and not mere association.  In other words, you can say with great confidence that smoking is a cause of lung cancer.  If it turned out that a high percentage of lung cancer patients had extra wrinkles around their mouths, this would be an association until proven otherwise.  In this case it might turn out that chain smokers have more wrinkles around their mouths due to near-constant puffing.  Wrinkles, then, are associated with lung cancer but are not causative.

Here then is the problem with many studies and their coverage in the press. A 10-40% increase or decrease in relative risk is interesting, but should be greeted with caution for several reasons. In many types of studies such an increase or decrease in relative risk could be caused by any number of factors OTHER THAN the one touted in the study.  Here are some issues by study design:

Interventional Studies

If an interventional study shows a 40% increase or decrease in relative risk it certainly sounds impressive, but in our lung cancer example a 40% increase would mean the absolute risk went from 0.000121 to 0.000169. Concerning, but not earth-shattering.  Interventional studies, if large, randomized, double-blinded, and placebo-controlled, are the gold standard and give us a pretty good idea of the intervention’s efficacy.  Even such studies are still prone to bias, post-hoc hypotheses, and faulty interpretation.  I’m looking at you, statin trials.
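
To see how a headline-grabbing relative change can stay small in absolute terms, run the hypothetical numbers (using our non-smoker baseline from above):

```python
baseline_risk = 0.000121          # non-smoker absolute risk per year
relative_change = 1.40            # "a 40% increase in relative risk"
new_risk = baseline_risk * relative_change
print(f"{new_risk:.6f}")          # 0.000169 -- concerning, not earth-shattering
```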

Observational Studies

Observational studies are particularly problematic because they will trumpet such nonsense as “increased raisin consumption associated with a 40% increase in lung cancer risk.” The average person may read the headline and run to the pantry to throw out any boxes of raisins they find hiding behind the soup cans. The problem here is two-fold: the absolute risk may still be quite acceptable and, more importantly, the association may be rather weak.  Observational studies simply don’t/can’t control for a large number of variables, though they try.

Epidemiological Studies (Studies of defined populations)

The worst studies known to mankind, in my opinion, are epidemiological studies of diet. An epidemiological study of raisin consumption in the United States would start with estimates of sales, spoilage, serving size, nutritional value, or whatnot, then divide all those assumptions by the total population. That assumes everyone eats the same amount of raisins over the same time frame, which fails to consider all the college kids binge-eating raisins over spring break.

Before I make this next point – quick! Tell me what you ate for breakfast on Tuesday of last week! Many diet studies rely on Food Frequency Questionnaires (FFQs), or as Dr. Georgia Ede so wryly calls them, “Food Fantasy Questionnaires.” We can’t remember what or how much we ate, so we make things up. Worse, people are embarrassed to admit that they eat Twinkies for breakfast, so they LIE on the questionnaire. Consider also that what you ate for breakfast this morning is almost certainly NOT what you typically ate for breakfast 10 years ago, and you have the next big problem: longitudinal studies that track participants’ health over many years (whether prospective or retrospective) and then purport to identify foods that kill and foods that protect based on a single FFQ at the beginning or end of the study period.

I thought I was done ranting, but there’s at least one more diet study issue to address, and that’s studies that claim to look at “low carb” diets in lab animals and in humans. First, the definition of low carb varies wildly: in my opinion a low carb diet should contain no more than 20% carbs, yet I’ve seen studies call a diet that is 43% carbs “low.” Second, it would now be considered unethical to feed humans trans fats as part of a diet study, but scientists routinely do it to lab animals as part of their low carb/high fat studies. The resulting ill effects on the animals’ health are then ascribed to the low carb diet and not/not to the trans fats in the diet. Oversight or dirty pool?