Building Evidence for Naturopathic Medicine: Are We Chasing Fool's Gold?

Abstract

In an effort to bridge the disparity between conventional and nonconventional healthcare practices, research evidence has been seen as a commonality that may provide a link between the different schools of thought. Considerable effort has been expended on the evidence-based practice agenda, and there is strong demand for nonconventional practices such as naturopathic medicine to produce research evidence in support of their therapeutic applications. Whilst few would disagree with basing patient care on information about what works, significant challenges remain concerning what evidence is and how practitioners use it in clinical practice. From the naturopathic perspective, a number of issues must be addressed in order to attain meaningful and applicable information. The purpose of this paper is to examine the nature, meaning and role of evidence in naturopathic practice, and to consider some creative and integrative research designs and methods within the naturopathic context.

Introduction

The increasing drive to adopt evidence-based medicine within conventional or mainstream healthcare services has lent a sense of urgency to calls for the nonconventional fields to follow suit (1). The term "evidence" is a fashionable buzzword in healthcare these days. We are inundated with terms such as "evidence-based practice", "evidence-based guidelines", "evidence-based decision making", "evidence-based policy making", and even "evidence-based patient choice" (2). But what is meant by "evidence"? The considerable effort expended on the evidence-based practice agenda by conventional medicine has created a strong demand for nonconventional fields such as naturopathic medicine to produce research evidence in support of their therapeutic applications. Of course, as practitioners, we want good evidence when making clinical decisions. While we want to deliver care with the utmost knowledge of what works, significant challenges remain with respect to the nature of the information that is construed as evidence, and practitioners' use of it in making decisions in clinical practice (2).

The goal of obtaining evidence through research seems straightforward enough. However, a major challenge for researchers and practitioners is the difficulty of applying the conventional research model to the nonconventional paradigm. The seemingly opposing natures of the disciplines (the reductionistic view of the conventional paradigm and the holistic view of the nonconventional paradigm) bring the nature, meaning and role of evidence into question (3). The nature of evidence may depend on the theory of disease and the healing principles that underlie a particular school of practice. The conventional conviction that there is a hierarchy for determining the "best evidence" is a misleading belief (4). The widely adopted concept of evidence-based medicine (EBM) gives a narrow view of what is considered "good" or "scientific" evidence, namely the randomized controlled trial (RCT) or meta-analysis of RCT studies (4). Critics have noted that such strict requirements fail to capture the fundamental principles of nonconventional practices such as naturopathic medicine (1, 3, 5, 6). Thus, appropriate outcome measures must reflect these principles in order to provide meaningful evidence. The purpose of this paper is to examine the meaning and role of evidence from a naturopathic perspective in order to gauge:

  1. the nature of the evidence that reflects naturopathic principles;
  2. the meaning of evidence to naturopathic practice;
  3. some innovative research designs and methods for studying therapies used in naturopathic medicine; and
  4. the role of research evidence appropriate to the underlying principles of naturopathic practice.

The nature of evidence

Evidence comes in a variety of forms and serves various purposes. It can range from clinical experience (including patient experience) to expert opinion to pathophysiological rationale to laboratory findings. Naturopathic evidence has mainly been empirical, coming from historical and experiential sources, while in conventional medicine the sources of evidence have been research and clinical/experiential knowledge. The emphasis on controlled (laboratory) research is a recent development. Practical knowledge has largely been seen by conventional medicine as idiosyncratic, subject to bias and thus lacking credibility (2). However, those of us who are clinicians know that we rely heavily on our own practical knowledge, as well as drawing on the expertise of others, to inform our practice. Thus the question becomes 'how may this form of knowledge be explicated and critically evaluated in order for it to become acceptable as evidence?' (2) After all, research evidence becomes more powerful when it matches clinical experience.

The reasons behind recovery from illness are often complex and synergistic. In reality, the recovery process cannot be captured or isolated in controlled environments (6). In addition, a patient's illness, such as the many chronic conditions seen by naturopathic doctors, is the result of complex physical, psychological and social interactions that are difficult to measure as single, objective outcomes (7). The nature of evidence in naturopathic practice must reflect the concepts of individualized treatment, the patient-doctor interaction, treating the whole person, and the concurrent use of multiple therapies in the healing process. Moreover, complex theoretical concepts specific to naturopathic medicine need to be considered. For example, the concepts of the physiological terrain, the vital force, the individual's constitution and a person's chi (qi) are all relevant to the basis of naturopathic practice (7). The question becomes how these concepts relate to the nature of the evidence, and how researchers can incorporate them into their research designs. The evidence agenda needs to consider a variety of sources of evidence and how each source contributes to clinical practice.

The meaning of evidence

In order to find the meaning of evidence, we must ask, 'what counts as evidence in naturopathic practice and education?' The meaning of evidence is more than the types of information extracted through research. Presently, little is known about how naturopathic doctors make use of evidence; that is, what does evidence mean to a practitioner, and what role does evidence play in patient care? For example, issues regarding standardization, efficacy and safety are prominent conventional objectives for research. However, the dominant view of how those areas should be addressed is often not considered within the naturopathic context.

While standardization in the conventional sense appears incongruent with naturopathic medicine, the definition of standards and how they are determined must reflect the reality and principles of practice. Standardization of the active ingredients in a botanical extract, or of the amounts of vitamins and minerals in a supplement, is necessary to ensure therapeutic effect. However, standardizing a strict protocol that can be generalized to a group of patients goes against the individuality of care in naturopathic medicine (8). We must make a clear distinction between standardization of a product and standard setting in practice to ensure quality of care.

Efficacy and safety are the two main issues for which the demand for research has been strong (9). While safety issues are of great concern, especially in monitoring potential side effects and interactions with pharmaceuticals, safety cannot be established exclusively in a laboratory setting. The safety of a therapy must be established within the context of usage. For example, Ephedra has been safely used, clinically and traditionally, for respiratory conditions; adverse events appeared only after its incorporation into weight-loss supplements to speed up metabolism. The context of use is an important component of sound evidence. Efficacy is also difficult to ascertain (10). From the conventional medical perspective, an intervention need only benefit one individual for it to be considered an effective therapy (3). A clinically ineffective intervention is one that would not benefit any individual patient for any specific disease. Any degree of improvement reported by the patient can be considered direct evidence of benefit. Indirect evidence, such as the p value from a statistical test, can be obtained through clinical trials. Tonelli and colleagues believe that whether we prefer direct evidence from primary experience or indirect evidence from clinical trials is a philosophical choice, not a scientific necessity (3). Naturopathic doctors have maintained that efficacy is demonstrated through their clinical experience, but there is little systematic documentation of this form of evidence. Thus, the challenge is to ensure that the evidence is obtained in a robust, timely and systematic manner. Furthermore, efficacy must reflect the nature of practice, such as the use of individualized treatments and multiple therapies.
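
To make this distinction concrete, the sketch below contrasts the two forms of evidence using invented numbers: a group-level p value (indirect evidence) computed from hypothetical improvement scores, alongside the per-patient changes (direct evidence) that the summary statistic collapses away. All data and variable names here are illustrative assumptions, not results from any actual trial.

```python
# Sketch of the direct-vs-indirect evidence distinction noted by Tonelli
# and colleagues (3). All numbers are hypothetical improvement scores.
from scipy import stats

treated = [12, 3, 18, -2, 9, 15, 7, 11]  # hypothetical treated-group changes
control = [4, -1, 6, 2, 0, 5, 3, 1]      # hypothetical control-group changes

# Indirect evidence: a single group-level p value from a two-sample t-test.
t_stat, p = stats.ttest_ind(treated, control)
print(f"p value (indirect, group-level evidence): {p:.4f}")

# Direct evidence: each patient's own reported change, which is visible
# only before the data are collapsed into one summary statistic.
for i, change in enumerate(treated, start=1):
    print(f"patient {i}: reported change = {change:+d}")
```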

Hierarchy of evidence

In any discussion of evidence, we cannot escape the seemingly authoritative aura of the hierarchy of evidence that has guided the "best available evidence" in recent years (4). In the mid-1990s, Sackett and colleagues offered the now-famous definition of evidence-based medicine (EBM) as the "conscientious, explicit and judicious use of current best evidence about the care of individual patients" (12). In recent years, the term EBM has become synonymous with good or scientific evidence, with the RCT touted as the gold standard of research design (13-15). The demand has been that nonconventional fields such as naturopathic medicine validate their systems of care against this hierarchy; that the RCT be the judge of how therapies perform.

The evidence hierarchy is constructed on the idea that different study designs provide different levels of evidence. The best or strongest evidence, according to the hierarchy, comes from meta-analyses and RCTs; cohort and case-control studies occupy the middle level; and cross-sectional surveys and case reports lie at the bottom. However, there is evidence to suggest that the hierarchy is not always "fixed": sometimes observational studies and RCTs yield similar effect estimates. As well, all study designs have potential weaknesses, and a highly rigorous cohort study should therefore supersede a poorly conducted RCT (4, 16). In fact, the hierarchy was not supported by recent publications, including two in the New England Journal of Medicine, which identified nonsignificant differences between the results of RCTs and observational studies (17-19). It begs the question: are we chasing fool's gold?

The evidence hierarchy disregards the issue of methodological aptness, i.e. the match between the research question and the design strategy (4, 20). For example, if the question, from a naturopathic perspective, is how a person's physiological terrain affects their health and their response to treatment, then the "best" design (i.e. the RCT) is not appropriate. The hierarchy was formed to assess the quality of evidence for establishing cause and effect, yet not all research is about determining cause. Petticrew and colleagues have proposed that an evidence typology may be more appropriate for addressing methodological aptness (20): matching research questions to specific types of research design would provide more meaningful evidence.
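
To illustrate what such a typology might look like, the sketch below encodes a simplified "horses for courses" mapping from types of research question to apt study designs. The categories and pairings are a paraphrase for illustration only, not Petticrew and Roberts' exact table.

```python
# Illustrative evidence typology: the apt design depends on the question.
# The pairings below are simplified assumptions in the spirit of (20).
EVIDENCE_TYPOLOGY = {
    "Does the intervention work (effectiveness)?": ["RCT", "cohort study"],
    "How or why does it work (process)?": ["qualitative study", "case study"],
    "Is it safe in real-world use?": ["cohort study", "case report"],
    "What do patients experience?": ["qualitative interview study", "survey"],
    "How common is the condition or practice?": ["cross-sectional survey"],
}

def apt_designs(question: str) -> list[str]:
    """Return study designs apt for a given type of research question."""
    return EVIDENCE_TYPOLOGY.get(
        question, ["no single apt design; refine the question"]
    )

print(apt_designs("How or why does it work (process)?"))
# -> ['qualitative study', 'case study']
```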

Naturopathic research needs to take a stand and not succumb to the conventional construct of a hierarchy of evidence. The narrow view of the hierarchy predominates current thinking in healthcare and seems to exclude other forms of empirical information. Empirical evidence ranges from single observations of individual patients to large clinical trials (21, 22). Rather than a hierarchy, the types and levels of evidence must be repositioned to reflect naturopathic principles and practice.

Innovative design & methods

Proper research design and methodology are crucial to obtaining sound evidence. A design is used to structure the research: to show how all of the major parts of the research project (i.e. the samples or groups, measures, treatments, and methods of assignment) work together to address the central research question(s).

There are two broad categories of design, namely quantitative and qualitative (23). Quantitative methods examine a phenomenon through the numerical representation of observations and statistical analysis. Most of us are familiar with the RCT, but the RCT is just one of a number of quantitative designs, and it can only be used for narrow questions of cause and effect, which may not always be applicable. In general, quantitative designs can be:

  1. Descriptive, concerned with describing the general characteristics of an event, such as person, place and time; e.g. cross-sectional studies, case reports and case series; or
  2. Analytic, focused on the testing of hypotheses; analytic designs can be
    1. Observational studies, such as case-control and cohort studies; or
    2. Intervention (experimental) studies, such as clinical trials, including the RCT.

Qualitative studies involve detailed, verbal descriptions of characteristics, cases, and settings. Qualitative research typically uses observation, interviewing, and document review to collect data. Some unique characteristics of qualitative research are that it takes place in a natural setting, uses multiple methods for data collection, is interpretive and views a phenomenon holistically.

This brief overview of design and methods is meant to expose the variety of tools at our disposal for conducting research and obtaining evidence in naturopathic medicine. Thus, the issue of methodological aptness is more important than aiming for a single gold standard. Recently, other designs have been given greater attention for studying complex systems such as naturopathic medicine.

Mixed methods offer a promising methodological approach to studying complex outcomes such as those seen in naturopathic practice. This type of design integrates one or more qualitative and quantitative techniques for data collection and/or analysis (24). Whereas quantitative methods may work best at isolating and identifying associations between variables at specific moments in time, qualitative techniques are particularly good at gaining insight into the processes and experiences of the participants (25). For example, to examine the impact of a naturopathic treatment on daily life, we might use a quantitative measure of quality of life (QOL) and qualitative techniques to gain insight into a patient's experience in adhering to the treatment plan. Mixed methods provide an opportunity to blend research traditions and give the investigator additional insights beyond the scope of any single technique.
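
The sketch below shows, with invented data and field names, how the two strands of the QOL example might sit side by side: a numerical QOL summary (quantitative strand), a tally of coded interview themes (qualitative strand), and a simple integration step comparing QOL between patients who did and did not report a given theme.

```python
# Minimal mixed-methods sketch; all patients, scores and themes are invented.
from collections import Counter
from statistics import mean

patients = [
    {"id": 1, "qol": 72, "themes": ["diet change hard", "felt heard"]},
    {"id": 2, "qol": 85, "themes": ["felt heard", "more energy"]},
    {"id": 3, "qol": 64, "themes": ["diet change hard", "cost concern"]},
]

# Quantitative strand: summarize the QOL outcome across the sample.
print(f"mean QOL score: {mean(p['qol'] for p in patients):.1f}")

# Qualitative strand: tally coded interview themes to see what recurs.
themes = Counter(t for p in patients for t in p["themes"])
print("most common themes:", themes.most_common(2))

# Integration: compare QOL for patients with and without a given theme.
hard = [p["qol"] for p in patients if "diet change hard" in p["themes"]]
rest = [p["qol"] for p in patients if "diet change hard" not in p["themes"]]
print(f"QOL with 'diet change hard': {mean(hard):.1f}; without: {mean(rest):.1f}")
```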

Another type of design involves the study of a single case or subject. These designs are variously known as single-case studies, single-subject designs, single-system strategies, time-series experimentation and n-of-1 studies. The method can be applied to an individual or a small group, whereby multiple sources of data are used to bring out a depth of detail from the participants. An advantage of this form of inquiry is that it can be incorporated into clinical practice as a useful way for clinicians to document individualized outcomes (26). The versatility of the design strategy makes it a viable option for studying the complex nature of naturopathic treatments and outcomes.
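
As a minimal sketch of such documentation, the example below analyzes hypothetical weekly symptom scores from one patient across an alternating baseline/treatment (ABAB) sequence. The period labels, scores and simple mean comparison are illustrative assumptions, not a prescribed n-of-1 protocol.

```python
# n-of-1 sketch: one patient's weekly symptom scores (lower = better),
# recorded across alternating baseline (A) and treatment (B) periods.
from statistics import mean

periods = [
    ("A", [7, 8, 7, 6]),  # baseline
    ("B", [5, 4, 4, 3]),  # treatment
    ("A", [6, 7, 6, 7]),  # treatment withdrawn
    ("B", [4, 3, 4, 2]),  # treatment reintroduced
]

for label, scores in periods:
    print(f"period {label}: mean symptom score = {mean(scores):.1f}")

# Pool the phases to document this individual's overall response.
a = [s for label, scores in periods if label == "A" for s in scores]
b = [s for label, scores in periods if label == "B" for s in scores]
print(f"baseline mean = {mean(a):.1f}, treatment mean = {mean(b):.1f}, "
      f"difference = {mean(a) - mean(b):.1f}")
```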

Naturally, there are theoretical and practical challenges associated with these methods. However, they may allow us to develop studies in creative and integrative ways that accommodate the multimodal, individualized nature of naturopathic care.

The role of evidence

Lastly, the role of evidence must be considered. Those doing research in naturopathic medicine or other nonconventional fields must ask: to whom are the findings presented (27)? That is, are we doing research solely to convince others of the merits of our therapies? Or is it to educate and inform ourselves as practitioners, to improve our knowledge and clinical skills? The criticism from the conventional side has been that nonconventional practices, such as naturopathic medicine, are unproven or lacking in evidence. As well, there have been misgivings among those in the nonconventional fields who see the drive for research as a jostling for control; after all, once a therapy is proven to work, according to preset standards, it becomes mainstream. So where lies the "truth" in the drive for research and the need for evidence? As a profession, naturopathic medicine has to consider what constitutes worthy, useful knowledge.

Naturopathic medicine is in a unique situation in that it has a variety of healing modalities at its disposal. Thus, naturopathic medicine must take a leading role in developing strategies for research, rather than follow the standards set by the conventional model. In order to fully incorporate the naturopathic perspective into obtaining evidence, challenges to what is considered credible evidence may be met by using innovative research designs and methods.

Furthermore, for naturopathic researchers, some pertinent questions that need to be asked are:

  1. Is it a question of "us versus them" in the push to conduct research? Who should gauge what is credible evidence?
  2. Is the drive for evidence based practice a matter of scientific necessity or a philosophical demand? What is the framework for reference in doing research?
  3. Can we really measure a phenomenon simply by quantifying it? Phenomena such as pain, energy, qi and sense of well-being are not truly measurable scientifically, yet are important barometers for gauging patient health status.
  4. How do we reconcile the need to provide individualized protocols with the conventional belief in standardization and generalization? Or should we?

Conclusion

The concept of using evidence in practice is not new, but the overemphasis on specific types of evidence may oversimplify the complex and interpersonal nature of patient care (14). Practitioners are encouraged to seek a broad evidence base that requires interaction between scientific and experiential sources (2). The theory of disease and healing that underlies naturopathic philosophy and practice should determine the framework for developing methods to measure appropriate outcomes (13). In order to engage in meaningful debate, documentation of the evidence must be thorough and accurate. As a professional body with standardized training, naturopathic medicine needs to define what the evidence is, where it comes from and how it is used. In turn, the research will reflect the essence of the therapies used in practice and meet the challenge of providing rigorous, sound evidence to add to the knowledge base and move the profession forward.

References

1. Carter B. Methodological issues and complementary therapies: researching intangibles? Complement Ther Nurs Midwifery. 2003 Aug;9(3):133-9.

2. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004 Jul;47(1):81-90.

3. Tonelli MR, Callahan TC. Why alternative medicine cannot be evidence-based. Acad Med. 2001 Dec;76(12):1213-20.

4. Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine". Am J Med. 1997 Dec;103(6):529-35.

5. Richardson J. The use of randomized controlled trials in complementary therapies: exploring the issues. J Adv Nurs. 2000 Aug;32(2):398-406.

6. Jonas WB. Magic and methodology: when paradigms clash. J Altern Complement Med. 1999 Aug;5(4):319-21.

7. Jonas WB. The evidence house: how to build an inclusive base for complementary medicine. West J Med. 2001 Aug;175(2):79-80.

8. Calabrese C. Clinical research in naturopathic medicine. In: Lewith G, Jonas WB, Walach H, editors. Clinical Research in Complementary Therapies. Toronto: Churchill Livingstone; 2002. p. 345-62.

9. Raschetti R, Menniti-Ippolito F, Forcella E, Bianchi C. Complementary and alternative medicine in the scientific literature. J Altern Complement Med. 2005 Feb;11(1):209-12.

10. Hyland ME. Methodology for the scientific evaluation of complementary and alternative medicine. Complement Ther Med. 2003 Sep;11(3):146-53.

11. Gupta M. A critical appraisal of evidence-based medicine: some ethical considerations. J Eval Clin Pract. 2003 May;9(2):111-21.

12. Sackett DL, Richardson WS, Rosenberg WM, Haynes RB. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71-2.

13. Walach H, Jonas WB, Lewith GT. The role of outcomes research in evaluating CAM. Altern Ther Health Med. 2002 May-Jun;8(3):88-95.

14. Williams DD, Garner J. The case against "the evidence": a different perspective on evidence-based medicine. Br J Psychiatry. 2002 Jan;180:8-12.

15. Hampton JR. Guidelines–for the obedience of fools and the guidance of wise men? Clin Med. 2003 May-Jun;3(3):279-84.

16. Mason S, Tovey P, Long AF. Evaluating complementary medicine: methodological challenges of randomised controlled trials. BMJ. 2002 Oct 12;325(7368):832-4.

17. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000 Jun 22;342(25):1887-92.

18. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000 Jun 22;342(25):1878-86.

19. Brighton B, Bhandari M, Tornetta P 3rd, Felson DT. Hierarchy of evidence: from case reports to randomized controlled trials. Clin Orthop Relat Res. 2003 Aug;(413):19-24.

20. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003 Jul;57(7):527-9.

21. Katz DL, Sabina AB, Girard C, Adelson H, Schiller-Liberti, Williams AL. Teaching evidence-based integrative medicine: description of a model program. Evidence-Based Integrative Medicine. 2003;1(1):77-82.

22. Katz DL, Williams AL, Girard C, Goodman J, Comerford B, Behrman A, Bracken MB. The evidence base for complementary and alternative medicine: methods of Evidence Mapping with application to CAM. Altern Ther Health Med. 2003 Jul-Aug;9(4):22-30.

23. Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. 2nd ed. Thousand Oaks: Sage Publications; 2003.

24. Creswell JW, Fetters MD, Ivankova NV. Designing a mixed methods study in primary care. Ann Fam Med. 2004 Jan-Feb;2(1):7-12.

25. Borkan JM. Mixed methods studies: a foundation for primary care research. Ann Fam Med. 2004;2(1):4-6.

26. Backman CL, Harris SR. Case studies, single-subject research, and N of 1 randomized trials: comparisons and contrasts. Am J Phys Med Rehabil. 1999 Mar-Apr;78(2):170-6.

27. Sheldon TA, Guyatt GH, Haines A. Getting research findings into practice. When to act on the evidence. BMJ. 1998 Jul 11;317(7151):139-42.