Some clarification is needed surrounding the different definitions of “research,” especially concerning nutrition. While I appreciate that more and more people are interested in learning how to better fuel their bodies for athletics, disease prevention, and disease management, a world of misinformation has evolved and continues to grow. Let’s dive into why.
Warning: although I tend to like to have fun with these posts, this one is serious and necessary. It is also a bit long, but I hope you stick it out because there are some HUGE takeaways here.
Novice or Amateur Nutrition vs. Professional Nutrition
As someone who has been a practicing nutrition professional for many years, I find nothing more insulting than when someone implies that they need not discuss things further with a registered dietitian nutritionist (RDN) because “I already do my own [nutrition] research.” What this usually entails is spending a ton of time on Google, typing in various things and reading all sorts of websites. While many consumer websites offer basic information of varying quality (sometimes reputable, depending on the source), that information is no substitute for an actual professional degree in the subject. Even if the person spends some time reading actual research studies, knowing how to interpret and analyze that research is a skill most people don’t have unless they’ve taken, at a bare minimum, a graduate-level research methods course.
Furthermore, a nutrition-specific professional degree is necessary to truly dedicate the time and energy it takes to be a legit “expert” in nutrition. Most medical doctors and mid-level practitioners (MDs, DOs, PAs, NPs) do NOT have a foundation in nutrition at all, and most really good doctors will admit that. A nurse, pharmacist, physical therapist, doctor, EMT, etc. does not have the nutrition foundation to be considered any sort of authority on nutrition specifically. I respect wholeheartedly that these allied health professionals can offer a dimension of expertise in their respective fields that I cannot; but being a healthcare professional does not automatically grant someone expertise in nutrition. Sorry, not sorry. It is incredibly important for consumers to begin to understand the differences among healthcare providers. Additionally, a personal trainer is not an authority on nutrition. Unless they have the proper letters behind their names, they do not have the standing to provide evidence-based nutrition advice, PERIOD. Read about what the “proper letters” are here.
Although an educational session with an RDN could be construed as “basic,” the depth of knowledge required to translate decades of evidence-based nutrition science into concepts the general public can understand is considerable, to say the least. Dietetics is the study of how nutrients impact disease prevention and treatment at the cellular level. However, in the big scheme of things, the average person doesn’t need to engage in a discussion about the intricacies of human physiology as it relates to nutrient absorption, utilization, and metabolism. In other words, even when the RDN keeps the conversation basic (whether about dietary fat, sports nutrition, or anything else), rest assured that there is a mountain of “behind the scenes” information underneath. And for most of us, the more simplified the information is, the better we are able to put it into action.
What Determines Research Legitimacy?
By definition, reading about information in any format, even if it is absolutely bogus crap, could be considered “research.” Reading reviews on a product on Amazon is “research.” But as it relates to nutrition, there is far more to what constitutes legit research. I am not going to attempt to teach a basic course in research design or analysis here, but I’m going to simplify some concepts in order to help you understand why rampant misinformation exists in the general public concerning nutrition. At a bare minimum when we look at research study designs, we end up with two broad categories of studies with multiple subtypes for each. These two broad categories are epidemiological studies and experimental studies.
Epidemiological studies can be broken down into multiple types, but in general, an epidemiological (observational) study tracks trends in nutrition habits alongside disease markers in a group over time.
- In some cases, observational studies are the only ethical way to study certain nutrition concepts. For example, it would be unethical to induce a deficiency of a nutrient in a group of human subjects only to determine its effect on disease and risk of mortality; therefore we can’t do clinical trials that entail depleting someone of vitamin A to see how quickly they go blind.
- Typically, observational studies run over much, much longer lengths of time than any clinical trial is capable of achieving. Some observational studies, such as the Nurses’ Health Study, have been going on for decades.
- Because of the huge groups of subjects observed and the incredible lengths of time we are capable of studying with regard to these epidemiological studies, they offer us insights that we are not capable of gathering from short clinical trials with smaller subject groups.
- Observational data is correlative, not causative. In other words, we can only determine that a trend exists in a group of people, but not what exactly caused that trend.
- Let me give you a non-nutrition example. We could observe over ten years the number of people who get speeding tickets on I-25 in Colorado Springs. Perhaps we could extract from that data that most of the people who were caught speeding happened to also be female and brunette. Does being female and brunette make you more likely to be a speed-demon? Probably not; it just so happens to be a qualitative factor of the data collected.
- Let me now give you a nutrition-related example. Suppose we look at some of the old data from the ’70s and ’80s surrounding heart disease. Many of the epidemiological studies in that era found that big groups of people who ate a lot of eggs were developing heart disease. Did eggs cause heart disease? Unlikely, especially if we look at all the other stuff the subjects were doing/eating. What if that particular group of subjects was completely sedentary, overweight, and didn’t eat a single fruit or vegetable any day of the week? What if they were eating eggs, sure, but also eating gobs of white bread and drinking a six-pack of soda daily? Was their heart disease related to the eggs, or did it have more to do with all the other shitty habits they had? For these reasons, it is typically impossible to draw definitive conclusions from observational studies about the role of a particular nutrient in health and disease.
- When we talk about correlations, we are talking about GROUPS of habits that have a common outcome. But the modification of one of those habits cannot be construed as definitive evidence that it alone is the cause of the problem.
Just like they sound, experimental studies test an intervention against a suspected outcome. They can offer insight into specific cause-and-effect relationships between nutrients and conditions, which can be really helpful, but there’s more to it.
- In order to get down to the nitty-gritty with nutrition advice, we need to know cause and effect. If we observe a trend in a group (as noted in the observational study design), we can then perform specific clinical trials to test some of the theories we developed after those observations.
- In double-blind, randomized, placebo-controlled clinical trials, we significantly reduce the likelihood of bias influencing the outcomes on both the researcher side and the participant side.
- We can make some sound conclusions on whether the intervention influenced the outcome. For example, a sample of healthy people receives either a capsule full of omega-6 fatty acids, omega-3 fatty acids, or a placebo. We can measure their cholesterol before and after the trial, and see how each type of fat affected their serum lipids.
- Experiments cost money, which many researchers are short on. If they receive funding from a source, sometimes that can influence the outcome of the trial. If Big Soda gives you money to test the effects of sugar on heart disease, it is impossible for me to believe they aren’t breathing heavily down your neck as a researcher to produce an outcome favorable to them…sorry; I simply can’t believe it.
- Because trials are expensive, they are often relatively short in duration. In other words, trials tend to not be several years or several decades long; therefore we lack sound data on true lifelong effects of nutritional habits when examining clinical trials.
- Confounding factors still make it difficult to truly extrapolate concrete information from even the best, unbiased studies. In the fatty-acid example I gave above, what if one of the groups had a profound increase or decrease in cholesterol? Can we agree 100% that it was independently due to the intervention they received? Nope! We can learn from it, but we can’t account for all of the other lifestyle influences those subjects may have had. For example, what if all the subjects sat on their butts and didn’t exercise, except for the omega-6 fatty acid group? We would expect their cholesterol to be affected by that type of fat, but maybe their sudden enthusiasm for exercise skewed the results. What if the omega-3 fatty acid group also decided to go on a salad-eating binge during the trial, and this influenced their reduction in cholesterol? You see, even the absolute best, unbiased clinical trials can have flaws we HAVE TO consider.
- Size matters. Many clinical trials are done with too few people and over too short a duration to support a true conclusion about whether the intervention was effective.
- This echoes the confounding problem: researchers track the modification of one habit, but they are not always paying attention to the multiple other habits going on simultaneously, and yet they draw conclusions based on that single modified habit. Somebody could go vegan, start running, and quit drinking soda all in the same day; which one contributed to their miraculous transformation?
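The “size matters” point above can also be made with a toy simulation. In this sketch the intervention has a true average effect of −5 points on a “cholesterol” measure (all numbers invented): a handful of tiny trials scatter wildly around that truth, while one big trial lands close to it.

```python
# Toy illustration of sample size: the intervention truly lowers
# "cholesterol" by 5 points on average, but tiny trials can easily miss it
# or wildly overstate it. All numbers here are invented for illustration.
import random
from statistics import mean

random.seed(7)

TRUE_EFFECT = -5.0  # the real average change caused by the intervention

def run_trial(n_per_arm: int) -> float:
    """Return the observed treated-minus-control difference in means."""
    control = [random.gauss(200, 30) for _ in range(n_per_arm)]
    treated = [random.gauss(200 + TRUE_EFFECT, 30) for _ in range(n_per_arm)]
    return mean(treated) - mean(control)

# Tiny trials bounce all over the place; a big trial hugs the true effect.
small = [run_trial(10) for _ in range(5)]
large = run_trial(5000)
print("five small trials:", [round(d, 1) for d in small])
print("one large trial:  ", round(large, 1))
```

Any one of those small trials, reported in isolation, could make the headline “intervention raises cholesterol!” or “intervention slashes cholesterol!” purely by chance, which is exactly why an underpowered trial can’t settle a question on its own.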
I recently had to participate in an evidence-analysis project (similar to a systematic review) in one of my graduate classes. Our topic was “The Role of Sugary Beverages in Promoting Inflammation Leading to CVD.” We used a specific, comprehensive set of keywords for our literature search. We then eliminated review studies, as we were required to use clinical trials specifically for our analysis. We then eliminated any studies that were not performed within the last five years. We then eliminated any studies that didn’t specifically deal with sugary beverages (not cookies and not candy; sugary beverages specifically). We ultimately ended up with eleven trials to review. Only eleven, and this is a hot topic!
Once we went through the Academy of Nutrition and Dietetics Evidence Analysis Worksheet to dissect each trial, we had to determine whether it had a large enough sample size, was seemingly free of bias, ran for a long enough duration, etc. You would be surprised how little credible data came out of this extensive review of the current literature. For this reason, I nearly died laughing yesterday when a layperson selling alkaline water told me “there are 3,017 studies to support it [alkaline water consumption]; maybe you’re not reading the right studies.” I can’t…this is so incredibly, stupidly, hilariously inaccurate.
Despite this wordy post, I have barely begun to scratch the surface of the intricacies that could be addressed regarding study design and efficacy. This is why, when someone, whether a patient or a morning news show, says “a recent study showed that…,” I automatically cringe: everything I just explained regarding study design, length of trial, confounding factors, sample size, funding source, etc., will not be considered; the person or news show will simply take that [possibly ridiculously crappy quality] study as absolute nutrition gospel from here forward. Tragic, isn’t it? Note from my political scientist husband: “A reminder that the media will cherry-pick any study that helps them sell products/advertising, which usually means something designed to generate fear in a consumer.”
For now, I’ll leave it there. But real nutrition research is not a review of the latest .com website. Online articles simply cannot be considered on par with a professional degree in nutrition, which affords you the expertise to decipher the information properly; THEY SIMPLY CANNOT. A website, or even one or two studies, is not enough evidence to forge a true conclusion about the cause and effect of a nutrition parameter. Googling isn’t research. You wouldn’t (I hope) get dental work from someone who was an avid Google researcher on dentistry, right? Doubtful. You’d probably go to an actual dentist with an actual accredited degree in dentistry. The same thing goes for nutrition counseling. It’s time the world realized it! Get some actual nutrition credentials before you start spewing out nutrition advice to anyone. If you’re seeking nutrition advice, seek it from someone who has the credentials to back it up (ps: there are only a few).
If you made it this far, thanks for sticking it out for this lengthy post. I really appreciate it; please share what you’ve learned with others.
See you soon…
Xoxo – Casey