Understanding, Evaluating, and Communicating Nutrition: A Researcher’s Perspective

(First in a series of three articles.)

The relationship between nutrition and health is fully entrenched in the mainstream media, and everyone from career scientists to our next-door neighbor seems to be an expert on the topic. Becoming skilled in research evaluation, being aware of media perspectives, and understanding different forms of bias are extremely important in this rapidly evolving field.

We recently interviewed Dr. Andrew Brown of the University of Alabama at Birmingham’s Office of Energetics and Nutrition Obesity Research Center, whose research group has a strong voice in the discussion of research evaluation and scientific integrity. In the first part of this series, Dr. Brown discusses the foundations of study design, which are vital to understanding research publications.

FOOD INSIGHT: Can you explain the differences between observational and experimental studies?

ANDREW BROWN: Observational studies and experimental studies describe two very broad classes of studies. In observational studies, the researcher tries to look for relationships in the world as it is. In experimental studies, the scientist tries to determine whether a change in the world will alter an outcome.

To give some examples: An observational study may ask whether people with higher biomarker levels of a nutrient have a lower incidence of a disease than those who have lower levels. Think of blood concentrations of omega-3 fatty acids and heart disease, for instance. This type of study looks at how those characteristics relate to health outcomes.

This can be a very different question from determining whether changing the biomarker (e.g., increasing omega-3 intake) causes a change in the disease compared to not changing the biomarker. The former example describes how characteristics and outcomes exist together; the latter helps determine whether changing people will change their outcomes.

Often, observational evidence will be described as “hypothesis generating,” and experimental evidence will be described as “establishing causation.” This is because there is always the possibility that the observations occur together by chance or for some other underlying reason. For instance, it could be that people with higher blood concentrations of omega-3 fatty acids are in some way systematically different from people with lower concentrations, and those same people also happen to have less heart disease.

This gives rise to the maxim, “Correlation does not equal causation.” Of course, this is an oversimplification, and there are entire books and courses dedicated to refining these distinctions. For example, when is an observationally derived hypothesis worth testing? How does an experiment have to be designed to generate certainty in causation? In which cases are observational studies good enough to assume causation?
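
To make the confounding problem concrete, here is a minimal, purely hypothetical simulation (not from the interview): a third variable drives both the “biomarker” and the disease risk, producing a strong correlation even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: some underlying trait that influences both variables.
confounder = rng.normal(size=n)

# Both quantities depend on the confounder, but not on each other.
biomarker = confounder + rng.normal(scale=0.5, size=n)
disease_risk = -confounder + rng.normal(scale=0.5, size=n)

# A strong negative correlation appears despite the absence of any causal link.
r = np.corrcoef(biomarker, disease_risk)[0, 1]
print(f"correlation: {r:.2f}")  # roughly -0.8
```

An experiment that manipulated the biomarker directly would reveal no effect on the outcome, which is exactly the distinction drawn above.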

FI: Would self-reported intake data be considered observational? If so, what can we make of these data?

AB: Self-reported data can be used in both observational and experimental studies. For instance, an observational study may ask people what they eat, and then correlate the responses with cancer incidence. Similarly, an experiment may change the characteristics of a menu and then ask people what they ordered.

Self-reported intake data as estimates of actual intake are always questionable because people are generally bad at remembering or reporting what they eat. Many different methods have been used to adjust self-reported intake, but unfortunately very few of them have been compared against a gold standard of documenting what people actually ate.

Even then, in the cases where these self-report methods have been compared to what people actually ate, to realistic biological limits (e.g., whether someone reported eating enough to even remain alive), or to alternative methods, the self-report data do not tend to hold up well.

There is ongoing debate about whether self-reported intake of particular foods should be used to draw scientific conclusions and, if so, in what settings. At the very least, there is a consensus that self-reported energy intake (as opposed to intake of specific foods) estimates actual energy intake so poorly that it has misled obesity science for years.

Research continues to use self-reported data without adequate guarantees that the data represent what the researchers think they do, and thus all self-reported dietary intake studies should be interpreted with caution.

FI: What are more objective measures that we should be looking for in studies?

AB: The measures that we should look for in studies depend, of course, on what is being studied.

If we are trying to look at an outcome, like the effect of a treatment on obesity, then we want to see obesity itself measured. How much food is consumed or how much energy is expended is interesting but cannot reliably be extrapolated to the outcome itself (e.g., obesity).

If, on the other hand, the research is focused on intermediate outcomes, like how changing a menu or buffet alters food choices, then biomarkers, photogrammetry (quantitative photography of food), or external observers are better than asking the people themselves.

These methods are being continuously refined. They also depend very much on the observers and analysts (and hopefully the participants) being blinded to the interventions because the humans involved with the study may consciously or unconsciously bias the results.

FI: What are meta-analyses and systematic reviews? Can you talk about the difference between them?

AB: When we talk about systematic reviews, we usually refer to an exhaustive, comprehensive review of the literature conducted in a way that someone else should be able to reproduce it.

After a systematic review, the data may be synthesized across studies (the analyses are analyzed together, and thus “meta-analyzed”). Systematic reviews typically involve defining a set of inclusion and exclusion criteria around the acronym PICO: patient or population, intervention (or exposure), comparison or control, and outcome.

For example, we conducted a systematic review and meta-analysis of studies including otherwise healthy individuals (population) who were given recommendations to increase intake of, or actually received, fruits and vegetables (intervention) compared to groups that did not (comparison) on measures of obesity (outcome).
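
As a purely illustrative sketch (the structure and field names are our own, not from the interview), PICO criteria amount to an explicit, checkable set of rules; the example below encodes the fruit-and-vegetable review described above.

```python
# Hypothetical encoding of the PICO criteria for the review described above.
pico = {
    "population": "otherwise healthy individuals",
    "intervention": "recommendations to increase, or actual provision of, fruits and vegetables",
    "comparison": "groups without the recommendation or provision",
    "outcome": "measures of obesity",
}

def include_study(study: dict) -> bool:
    """Toy inclusion rule: keep a study only if it reports every PICO element."""
    return all(study.get(field) for field in pico)
```

Writing the criteria out this explicitly is part of what makes a systematic review reproducible: another team applying the same checklist should arrive at the same set of studies.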

Meta-analyses are meant to pool the data together into one summary estimate. A scientist can conduct a systematic review without conducting a meta-analysis, but meta-analyses are typically completed only after a systematic review.

All kinds of data can be analyzed together, but the quality of the meta-analysis is completely dependent on the quality of the original studies and whether or not they are truly comparable. “Garbage in, garbage out” certainly applies here.
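
To give a rough sense of what pooling into one summary estimate can mean, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis; the numbers are invented, and real meta-analyses involve many more steps (heterogeneity assessment, random-effects models, publication-bias checks).

```python
import math

# Invented per-study effect estimates (e.g., mean change in BMI) and standard errors.
effects = [-0.10, 0.05, -0.20, -0.02]
std_errors = [0.08, 0.12, 0.15, 0.06]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2,
# so more precise studies count for more.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```

The pooled estimate is only as trustworthy as its inputs, which is the “garbage in, garbage out” point above.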

Unlike narrative reviews, systematic reviews will hopefully contain less researcher bias, acknowledging, of course, that nothing is ever completely free from our human nature. Narrative reviews are the process of taking data or hypotheses that exist in the literature and narratively synthesizing the information.

This can be important when trying to explain complex or diverse topics. For instance, we wrote a narrative review about endocannabinoids that tried to explain current thinking on mechanisms of action, possible biological effects, metabolism, and other aspects of the biology of the compounds. That serves a completely different purpose from trying to summarize, in an unbiased and comprehensive manner, the effects of compounds, treatments, and exposures on health, which is typically the goal of a systematic review.

Narrative reviews are subject to selection biases, meaning that the authors (including myself) have a tendency to write about what they know or what they have come across. Systematic reviews are structured to have scientists look beyond what they know.

There are other uses of the term “systematic review,” however, that refer to pulling studies together in a non-exhaustive but reproducible way. One such study pulled ingredients from a cookbook and investigated the first subset of articles the authors came across in the literature.

This is neither comprehensive nor exhaustive, but it is systematic. This use of systematic review is much less common.

Similarly, meta-analysis can just mean some form of pooled analysis across studies (e.g., counting how many women were studied in each paper), but typically researchers mean quantitatively summarizing the outcome data. It is important to know what the authors are trying to communicate when reading such a paper.