Search the Internet for “cancer therapy” and you will find tens of millions of webpages. How can you sort through to find the few pages that will be most valuable for you? How can you possibly know which websites or information to trust? How do you analyze and find clarity when “experts” give opposite recommendations? How can you tell the “snake oil” from truly valuable therapies?
First, please understand that BCCT does not claim to have all the answers. As we pull together our information and summaries, we struggle with these same issues: Is this source reliable? Is this research valid? Is this expert qualified to speak about this therapy?
However, even as we are always learning ourselves, we share with you some guidelines for determining whether a website or article rates higher or lower on a trust scale. You may also want to read about our approach to integrative cancer therapies.
Type and Strength of Evidence
Highlighted Videos
BCCT project manager Nancy Hepp, MS, explains what evidence BCCT summaries are based on.
BCCT advisor Donald Abrams, MD, describes the difference between "evidence-based" and "evidence-informed" approaches in a 2014 presentation.
BCCT advisor Mark Renneker, MD, explains how to get accurate information about cancer.
Evidence Trade-offs
Evidence that a therapy “works” ranges from completely unreliable to trustworthy. We present an overview of some of the issues in determining whether evidence is reliable and appropriate, first from a researcher's viewpoint and then from a clinician's.
The Researcher's View: Hierarchy of Evidence
We present 12 levels in a research hierarchy of strength of evidence, starting with the least reliable evidence and working down through increasingly credible sources (all examples are fictional). The evidence toward the bottom of this list (larger numbers) is generally regarded as stronger and more reliable than evidence toward the top.
People without training or credentials provide their overall impression of a therapy. Examples: “This stuff is great!” or “This therapy has improved my life!” or “This therapy doesn’t work.”
People without training or credentials provide specific information about how a therapy worked for them. Examples: “My pain disappeared in two days” or “I was finally able to sleep through the night.” Side effects and other risks of a treatment often go unreported.
A medical provider shares information from personal experience treating patients. Example: “During 32 years in medical practice, I’ve seen this treatment help hundreds of patients with hot flashes.”
Cells vs. People
Studies on human cells can be helpful in finding effects of drugs, radiation, natural compounds and other potential therapies on tumors. However, isolated cells or tissues in a highly controlled lab may behave very differently from tumors and other cells in real human beings.
Drawing conclusions from cell studies is fraught with the potential for error, a little like predicting children's eventual careers from their performance in kindergarten: some differences do persist through the many levels and experiences along the way, but many other intervening variables can change the outcome.
While cell studies are good markers for therapies to explore further, lab results alone are not good evidence of a therapy's ultimate effects. In our therapy summaries, we list clinical evidence first, and then we include lab and animal evidence for further insights. When no clinical evidence is available, lab and animal evidence is offered, but we do not consider it strong evidence.
One patient or a small group is treated and observed over time. Case studies can be published in medical journals. Examples: “Doctors report that six patients given this treatment experienced less nausea and vomiting than before treatment” or “Ms. X was followed for seven months on this treatment, with these results.” Both benefits and burdens such as side effects are reported.
Patients are asked to remember what therapies they used in the past, and researchers look for patterns and compare them to current health status. Patients are not always accurate in remembering or reporting past practices, which is a serious problem in these studies. Example: “Five thousand breast cancer patients were asked about their diets for the last 20 years.”
Patients’ treatments are observed and recorded by researchers, or patients are asked to record therapies they use in the present, and researchers look for differences in outcomes based on differences in therapies. Example: “Five thousand breast cancer patients were asked to keep food journals for six months following their diagnosis.”
Observational studies may follow a specific group of people—called a cohort—over time to track outcomes.
Observational studies show a relationship between a therapy and an outcome, but they do not show that the therapy caused the outcome.
Similar animals are divided into two or more groups, with one group receiving the treatment and others given no treatment (or a different treatment).
9. Small prospective, experimental clinical studies, sometimes called “pilot studies”
Blinded Trials
The practice of “blinding” trials of treatments—so that patients are not aware of which treatment they receive—is held up as a hallmark of rigorous research. Blinding the physicians or practitioners as well (a “double-blind” study) is considered even better.1
This blinded approach can be achieved when the treatment is a pill or injection. However, many treatments are not adaptable to a blinded study. Blinding can be:
Impractical: making new foods appear to be the same as the foods that patients typically eat, or preventing people from finding out whether the magnets they are using are real
Ridiculous: performing surgery that only pretends to remove a tumor or implant a device, or that implants a phony device
Impossible: blinding patients to whether or not they have exercised or participated in a support group
In cases where blinding is not appropriate, other study designs can provide valuable and rigorous results as long as researchers are transparent about the relative strengths and limitations of their selected study design.2
Two or more small groups of patients (typically fewer than 100) are made as similar as possible or are randomly divided into groups. Groups receive different treatments, or different levels of a treatment, and typically one group receives standard care or no treatment (perhaps in the form of a placebo). The health outcomes of the various groups are compared. Sometimes the groups switch treatments after a period to further determine whether the health outcome is due to the treatment or to differences in the patients. Patients and even their healthcare providers may not know which treatment they are receiving (a “blinded” study).
10. Large prospective, experimental clinical studies
A study starts with several hundred patients, with even larger numbers considered stronger. This group is divided into two or more groups, with random assignment considered a stronger study design. As in smaller studies, placebos or blinding may be used. The health outcomes of the various groups are compared. Note that sometimes study effects (the effects of the treatment) can seem modest for each patient, but a statistically significant effect is found for the whole group. Sometimes only a minority of patients show any effect of the treatment, but the treatment is considered a success for those patients. Also, sometimes the comparison group is not sufficiently well designed to make sound conclusions about the effects of treatment.3
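The interplay of effect size and group size described above can be sketched with a small calculation (a hypothetical illustration; the effect size, standard deviation and group sizes below are invented, not drawn from any study):

```python
import math

def z_statistic(mean_diff, sd, n_per_group):
    """Two-sample z statistic for a difference in group means (equal group sizes)."""
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    return mean_diff / standard_error

# A modest per-patient effect: one tenth of a standard deviation.
# With ~1.96 as the usual two-sided 5% significance threshold, the same
# small effect is non-significant in small trials but significant in large ones.
for n in (50, 500, 5000):
    z = z_statistic(0.1, 1.0, n)
    print(f"n per group = {n}: z = {z:.2f}, significant = {z > 1.96}")
```

With 50 patients per group the z statistic is 0.5, far below the threshold; with 5,000 per group it is 5.0, so the same modest per-patient effect registers as statistically significant for the group.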
Researchers analyze and synthesize the results of all studies to date involving a treatment to find patterns, similarities and differences in study results.
Researchers conduct a review and also combine the results of many studies. This approach can often find more subtle effects that may have been overlooked or dismissed in the individual studies. A meta-analysis may be able to find reasons that smaller studies found opposite outcomes.
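One common way of combining results is fixed-effect (inverse-variance) pooling, sketched below; it shows how three individually inconclusive studies can yield a statistically significant pooled estimate (the `fixed_effect_pool` helper and all study figures are invented for illustration, not taken from any published meta-analysis):

```python
import math

def fixed_effect_pool(estimates):
    """Inverse-variance (fixed-effect) pooling of study results.

    estimates: list of (effect, standard_error) pairs, one per study.
    Returns the pooled effect and its (smaller) standard error.
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * eff for (eff, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical small studies; none reaches z > 1.96 on its own.
studies = [(0.30, 0.17), (0.20, 0.18), (0.25, 0.15)]
effect, se = fixed_effect_pool(studies)
print(f"pooled effect = {effect:.3f}, se = {se:.3f}, z = {effect / se:.2f}")
```

Each study's z statistic (effect divided by standard error) falls below 1.96, yet pooling shrinks the standard error enough that the combined z is about 2.6, illustrating how a meta-analysis can surface effects the individual studies could not confirm.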
A panel of medical researchers reviews all the evidence to date and assigns the therapy to a category of recommendation for specific medical conditions. The following examples of categories are based on clinical practice guidelines from the Society for Integrative Oncology:4
Strong recommendation in favor of use:
High- or moderate-quality evidence shows that benefits clearly outweigh risk and burdens.
Weak recommendation in favor of use:
High- or moderate-quality evidence shows that benefits are closely balanced with risks and burden.
Weak (inconclusive or conflicting) evidence leaves uncertainty in estimates of benefits, risks and burden; no clear advantage is shown over other options.
Weak recommendation against use:
High- or moderate-quality evidence shows that risks and burden are probably greater than benefits.
Evidence shows no advantage compared to other options, while risks and burdens may be greater.
Weak (inconclusive or conflicting) evidence leaves uncertainty in estimates of benefits, while risks and burden are established.
Strong recommendation against use:
High- or moderate-quality evidence shows that risks and burdens clearly outweigh benefits.
The Clinician's View: Therapies in the Real World
Progression-free Survival
In assessing the effectiveness of cancer treatments, the outcome being measured can make a big difference in interpretation. For example, studies reporting only progression-free survival (PFS) may find effectiveness, yet many patients do not perceive much benefit.
PFS refers to the length of time during and after treatment in which the cancer does not progress—a short-term measure of treatment effectiveness. It is not as powerful an outcome measure as overall survival (OS).
In a systematic review and quantitative analysis, researchers found that progression-free survival is not significantly associated with quality of life in people with cancer.5
Dr. Li Xie, one of the study authors, advises patients and their doctors to consider that, "In cancer patients, there are two important things when evaluating a therapy: whether it extends [overall] survival and whether it improves quality of life (even if it doesn't extend survival)."6
PFS as a short-term measure may under- or overestimate the value of a medication or treatment.7
The Gold Standard: Why Randomized Controlled Trials Don't Always Tell the Real-World Story
Many oncologists rely heavily on the outcomes of "gold standard" randomized clinical trials (RCTs) in recommending therapies to patients. Some researchers tell a more complex story. RCTs try to control all variables, but in the real world, outcomes may be different. That's why observational studies—what clinicians see in practice—actually matter.
GRADE (Grading of Recommendations Assessment, Development, and Evaluation) is a formal, scientific rating system. Research studies are rated according to the strength of the study design and the rigor of their conclusions. Similar to the hierarchy of evidence above, GRADE prioritizes randomized controlled trials but also considers information from observational studies when necessary.
Even the best research studies have limitations. "Cause and effect relationships between single risk factors and disease outcomes will always be difficult to establish with certainty."8 In other words, even when only one risk factor contributes to a disease, proving that the risk factor causes the disease is difficult. But cancer is a complex disease, with many related and interacting factors contributing to risks of both incidence and progression of the disease. Establishing cause-and-effect relationships with complex diseases is nearly impossible.
"Complex, multifactorial diseases are not easily studied by carefully controlled epidemiologic investigations that meet GRADE criteria."9 Without absolute certainty or proof, which we will not likely ever achieve, we are left to act based on the best available evidence. BCCT summarizes the best available evidence about the effectiveness, safety and use of a wide array of therapies. Large gaps exist in our knowledge, but we believe enough evidence has accumulated on many therapies to support their thoughtful, supervised use.
Even though some study designs are considered much stronger, a trade-off exists: To achieve the amount of control needed for experimental studies showing causal relationships, clinical conditions need to be so precise that they can lose any meaningful relationship to “real world” contexts. For example, researchers may control for these variables:
Age and general health of the subjects
Sex (selecting only male or only female subjects)
Specific diagnosis and disease stage
Co-morbidities (excluding people with other health conditions, such as heart disease, kidney disease or diabetes)
Lifestyle factors including diet, tobacco use or alcohol use
Even in rigorous studies, additional factors may not be controlled, such as patient use of alternative therapies, the patients’ levels of stress or distress, genetics, sleep quality and so on.
Because real patients in clinical situations are each individuals with huge variations in their life situations as well as their disease states, the results of highly controlled studies may not apply to very many patients. In some cases, observational studies that come closer to real-life situations may actually provide more valuable information for clinical use than studies that are considered the “gold standard” by researchers.
The Evidence House: Valuing What the Physician Sees in Practice
BCCT advisor Wayne Jonas, MD, explains that the strongest study designs don’t always provide the best evidence:10
As most clinicians know, the reasons that patients recover from illness are complex and synergistic, and many cannot simply be isolated in controlled environments. The best evidence under these circumstances may be observational data from clinical practice that can estimate the likelihood of a patient's recovery in a realistic context.
In addition, patients’ illnesses are complex physical, psychological, and social experiences that cannot be reduced to single, objective measures. In some cases, the most valuable information for a clinical decision is a highly subjective judgment about life quality. This personal experience of illness might be captured only through qualitative research, not using questionnaires or results of blood tests. The “best” evidence under these circumstances may be the meaning that patients give to their illness and recovery.
At other times, the “best” evidence comes from laboratory studies. The discovery that St John's wort can reduce blood levels of immunosuppressive drugs, for example, is the most crucial evidence when making decisions about its use in patients taking immunosuppressive medications. Findings of controlled trials often do not reveal such drug interactions. Arranging types of evidence in a “hierarchy” obscures the fact that sometimes the best evidence is not objective, not additive, and not clinical.
Evidence House illustration
Jonas has proposed an “evidence house” with different “rooms” and different “wings” for different audiences and purposes:
Rooms in one wing of the house contain types of scientific information such as laboratory research. These rooms seek to find causes of disease, how therapies work, and proof of effectiveness—types of information that can be difficult to determine or that may take many decades.
Another wing has rooms with information about therapies’ relevance and usefulness in clinical practice rather than absolute proof of effectiveness.
“If resources are disproportionately invested in certain rooms of the house to the neglect of others, it is not possible to obtain the evidence needed for full public participation in clinical decisions...Each has different functions and all need to be high quality.”11
In sum, researchers may have a strong study design and get convincing evidence that isn’t especially relevant to actual patients and clinical care. Interpreting study results involves assessing the trade-offs between highly controlled situations and relevance to real life, then using the evidence that makes the most sense in the situation.
"The Lower the Risk of Harm, the Lower the Burden of Proof"
Experimental therapies both in mainstream cancer medicine and in complementary medicine can be considered science-informed rather than fully science-based. Because the scientific process can be slow to accumulate enough evidence to be conclusive, and because cancer patients often don’t have decades to wait for rigorous research results, science-informed therapies are often the only option beyond standard therapies. These might include therapies supported by case studies and observational studies, or for which a strong theoretical rationale exists but empirical studies are unavailable, incomplete or inconclusive.
BCCT advisor Dr. Donald Abrams makes another critical point with respect to science-informed therapies: The lower the risk of harm, the lower the burden of proof. The burden of proof is lower if a therapy meets these criteria:
It is unlikely to do harm.
The patient considers it affordable.
The patient is drawn to it or believes it may have value.
Therapies that are not especially dangerous and that have credible evidence that they may be helpful do not need as much proof of benefit as therapies that involve more risk or expense. BCCT views the use of science-informed, low-risk, affordable therapies as a reasonable option for patients. Stronger evidence of benefit is needed for therapies that are risky, expensive or otherwise burdensome.
Financial Ties
Websites or people who are trying to sell something have an incentive to make their product look the best that it can. Even when salespeople have honorable intentions, benefits can be unconsciously promoted and potential harm downplayed. However, just because a website sells products doesn’t mean its information isn’t valid. BCCT recommends that you check the information on a site against other highly credible sources, which we describe below.
Financial ties also operate in critical reviews. If an expert is critical of a treatment, consider whether that treatment competes with another treatment that the expert is tied to financially in some way. For example, a scientist from a pharmaceutical company may criticize the claim that a complementary therapy such as meditation or yoga can relieve pain in place of a pain medication. Again, check the information against other highly credible sources.
For websites or experts that aren’t selling products or services, check to see if they are transparent about their funding. Does the site tell you who sponsors or supports the site and information?
Author Qualifications
Red Flags
A few “red flags” cause us to question the value and validity of some sources:
Authors:
Authors and sponsors are not identified.
Conflicts of interest (such as an author’s or “expert’s” ties to manufacturers or organizations) are hidden.
A site or person claims that one therapy (likely a therapy for purchase on the site) can cure cancer.
A site or person purports to have connections to God or spiritual forces that users must pay to access.
A site or person is excessively critical of approaches different from theirs or pushes their therapy to the exclusion of others.
Evidence:
Testimonials from users or sellers are the only source of evidence.
No references or details are provided for unnamed studies purporting to prove a therapy is effective.
Studies cited are not published in reputable, peer-reviewed journals. Some “pay to publish” journals will publish anything from authors, regardless of its scientific merit.
A person or site dismisses critics of their product or service without offering any evidence.
Information is not dated.
Sites with any of these characteristics, and especially more than one, should be treated with a great deal of caution. BCCT recommends validating any information from these sites through more transparent or authoritative sources.
The qualifications of authors should be described. Formal education and training, experience and independent investigation are all valid qualifications. These should be made available to the reader. Any conflicts of interest, whether financial or organizational, should be listed for authors.
BCCT’s Approach to Information
When we evaluate claims regarding therapies and treatments, BCCT strives to consider both experts’ financial interests and where they draw evidence from. We indicate in our footnotes where our information comes from and provide a link if possible so that you can check the source yourself. We look for the most credible sources available.
Clues that you should be skeptical and verify information through more reliable sources:
Is this person or website encouraging you to buy something?
Does this person or site make only vague statements about effectiveness without any evidence?
Do you have to pay for a product or therapy before you can receive specific information about it?
Sources We Trust
KNOW Oncology Resource
Evaluating Information in Social Media
Quite a lot of both outdated and inaccurate information is available online and is passed along through social media.
A 2018 study evaluated the accuracy of 150 videos on prostate cancer screening and treatment posted on YouTube. Study findings:12
Few videos provided summaries or references or even defined medical terms.
Videos with lower scientific quality were actually met with higher viewer engagement: more views and more “thumbs up” ratings and comments.
Comments often contained advertising and peer-to-peer medical advice.
We encourage our readers to check the dates on information, cross-check claims with reliable and authoritative science-based sources, and validate claims as far as possible before investing much time or money in miracle cures.
Our Resources collection includes many books, websites, videos and other resources that we have found to be trustworthy. However, we welcome our users’ critiques of these resources and are deeply grateful when users alert us to resources that should be reconsidered or removed.
Written by Nancy Hepp, MS and reviewed by Laura Pole, RN, MSN, OCNS, and Michael Lerner, PhD; most recent update on February 22, 2021.
Thiese MS. Observational and interventional study design types; an overview. Biochemia Medica (Zagreb). 2014;24(2):199-210; Stürmer T, Brookhart MA. Chapter 2: Study Design Considerations. In: Dreyer NA, Nourjah P, Smith SR, Torchia MM, editors. Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Jan. AHRQ Methods for Effective Health Care; Panagiotakos D, editor. Study design. BMC Medical Research Methodology. Viewed March 16, 2018.