
Pandemics, Decision Making and

Evidence-based Medicine

As stewards of medical care, clinicians are often faced with making medical decisions absent adequate information. Indeed, uncertainty is a clinician’s constant companion. An uneasy relationship, uncertainty can manifest as indecision, trepidation, delay, or as the unyielding pursuit of its inverse — certainty. In the context of COVID-19, uncertainty abounds and, as is often the case, our response has been to seek greater certainty. 

 

In her seminal work on uncertainty in medical care, Fox describes three types of uncertainty. The first derives from an incomplete or imperfect mastery of available knowledge; the second depends upon the limitations of current medical knowledge; and the third “…consists of difficulty in distinguishing between personal ignorance or ineptitude and the limitations of present medical knowledge.”

In medicine and public health, we try to fill gaps in knowledge through research. Evidence-based medicine (EBM) attempts to bridge the distance between medical research and clinical practice. EBM is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients by integrating best available evidence, clinical expertise, and patient values. EBM prioritizes three epistemological principles:

1) the practice of medicine should employ the best available evidence, with the knowledge that not all evidence is created equal (e.g., the hierarchy of evidence, where randomized controlled trials are considered better than case studies);

2) examining the totality of evidence, and not selecting evidence that favors a particular outcome or claim;

and 3) clinical decision making must include the patient’s values and preferences.

In response to the COVID-19 pandemic, clinicians and public health decision makers called for best evidence to inform the decisions at hand, leading to recommendations on social-distancing mandates, use of personal protective equipment, the implementation of effective treatments for the novel pathogen, and allocating ventilators to patients with COVID-19.

During a pandemic, the discrete disciplinary lines between clinical practice and public health blur, and all sources of data — individual level and population level data — begin to drive decisions. 

As a researcher, I am interested in how evidence is generated and prioritized for use in decision making. What intrigues me most as a public health scientist is the role of “evidence” at this critical moment; what we can learn from it, and what is next as we consider moving through the immediate aftermath of the onset of COVID-19 in the United States.  

Best Evidence and Pandemic Exceptionalism:

A Case Study

In March 2020, a study was initiated to assess the efficacy of hydroxychloroquine — a drug currently used to treat patients with rheumatoid arthritis and systemic lupus erythematosus — in combination with azithromycin as a treatment for COVID-19. The authors published interim findings in late March, reporting that “…hydroxychloroquine treatment is significantly associated with viral load reduction/ disappearance in COVID-19 patients and its effect is reinforced by azithromycin.” The study was quickly picked up by media outlets and disseminated widely.

A subsequent commentary on the study by a group of rheumatologists stated that, “Given the urgency of the situation, some limitations of this study may be acceptable, including the small sample size, use of an unvalidated surrogate endpoint, and lack of randomization or blinding. However, other methodological flaws also noted by others may affect the validity of the findings, even in the current setting, where an efficacious treatment is desperately needed.”

The extraordinary speed with which this study was conducted and reported (under one month) is atypical for clinical research. The publication of a study with such significant methodological flaws is not necessarily atypical; however, the rationale offered for the permissibility of such flaws is of concern. The research team and commentary authors seem to “exceptionalize” pandemic research, indicating that some shortcuts are necessary given the circumstances.

London and Kimmelman advance an argument against pandemic research exceptionalism, advocating for the maintenance of rigorous scientific and ethical standards in the context of the COVID-19 pandemic. Their analysis of clinical trials data indicates that within 6 weeks of the study publication, ~75,000 patients had been registered for testing various hydroxychloroquine regimens for COVID-19. “This massive commitment concentrates resources on nearly identical clinical hypotheses, creates competition for recruitment, and neglects opportunities to test other clinical hypotheses.” 

In addition to the opportunity cost associated with choosing to study this hypothesis over others, and the exposure of ~75,000 subjects to a drug with potential risks and harms, the attention to this line of inquiry has led to increased demand for hydroxychloroquine.  The drug was designated as “currently in shortage” by the Food and Drug Administration as of March 31, 2020.  It is fair to ask whether the attention paid to this drug has secondary impacts on the delivery of care to lupus and rheumatoid arthritis patients who depend on the drug. 

Stacey Springs

PhD, Research Associate, Brown University School of Public Health

Reframing Evidence and Uncertainty

In an effort to provide evidence to inform decision making, we cannot forget why it is we pursue evidence — to inform. The hydroxychloroquine case study illustrates the need for enduring commitment to the epistemological principles that underlie EBM and to rigorous, replicable research to inform policy and practice. 

This case study also reaffirms a persistent truth, articulated by the Institute of Medicine in 2011: “…clinicians must accept uncertainty and the notion that clinical decisions are often made with scant knowledge of their true impact.” EBM concedes that despite our rigorous tools of evaluation and methodical approach to synthesizing evidence, we can never be certain of the effects of a given treatment or the power of a diagnostic test. This is especially painful to hear in the midst of a pandemic that relies so heavily upon diagnostic testing. Yet this notion of never being completely certain comports with our understanding, and acceptance, that scientific knowledge is never complete and is ultimately fallible. Thus, uncertainty is embedded, but not always prominently featured, in our approaches to evidence-based medicine.

Perhaps now is the right time to finally accept uncertainty as a constant companion in clinical practice and public health decision making, and not as being in conflict with our notions of evidence-based medicine. Perhaps it’s time to consider EBM as one among a complement of modes for acknowledging and effectively managing uncertainty with patients and communities, not for them.

Complex Systems Approach and the Role

of Evidence Synthesis 

Increasingly, there have been calls for public health to shift toward a “fifth wave” acknowledging that “…the public health community is dealing not with simple systems that can be predicted and controlled, but complex adaptive systems with multiple points of equilibrium that are unpredictably sensitive to small changes within the system.”

 

EBM relies on a complement of tools which have predominantly focused on pursuing comparative effectiveness questions — that is, whether a particular treatment works for a particular population. These include familiar research tools, such as systematic reviews with or without meta-analyses, which are grounded in linear models of causality and pursue goals of certainty and predictability.

In these ways, EBM is particularly good at ameliorating the second type of uncertainty described by Fox, serving as a means of correcting deficits in medical knowledge. However, in the current healthcare research paradigm, we often have the right answers to the wrong questions. Complex systems approaches ask us to reframe research questions to interrogate how interventions interact with and impact the healthcare system, rather than simply whether a particular intervention works.

So, if we adopt a complex systems approach to public health, must we shed our alignment with EBM? Not necessarily. 

The process which underlies the identification of best evidence often provides a richer yield than simply discerning which among the studied interventions has superior effectiveness. These processes of reviewing the evidence are formalized as the methods which underpin evidence synthesis — the compilation and integration of data from various sources to summarize and interpret existing knowledge, its distribution, and gaps in the evidence, and to contextualize that evidence.

Evidence synthesis has the potential to support discerning between meta-cognition (knowing what we don’t know) and meta-ignorance (not knowing what we do not know) which is at the heart of Fox’s third type of uncertainty. Evidence synthesis reviews document the availability and distribution of evidence in a particular field, revealing which interventions and comparators have been studied in a particular population, and which outcomes have been measured to assess these interventions. Evidence reviews often include an assessment of the quality and rigor of the existing evidence. Thus, we can document what has been studied and whether these studies are applicable to the current decision and are of sufficient quality to implement. The elucidation of gaps in evidence, where no studies exist or where studies do exist but are of insufficient quality to implement, can prioritize research agenda setting and funding. Evidence mapping techniques, rapid reviews, and scoping reviews are of particular import for this purpose. 

 

A complex systems approach engages interdisciplinary expertise and cross-sector collaboration to identify methods to design, implement, and evaluate interventions for changing these systems to improve public health.

We often think of translating research into practice as a sequential process, each discrete phase building upon the last and ultimately culminating in a patient care intervention, vetted and ready for use in the clinician’s toolbox. The COVID-19 crisis demanded simultaneous — not sequential — action across scientific disciplines: basic scientists were asked to elucidate the genetic signature of the disease, clinical researchers were asked to move old and new pharmacotherapies and biological agents into early phase clinical trials, and public health researchers were asked to provide epidemiologic and modeling data on the spread of disease and predict mortality in our communities.

Now, this would be considered interdisciplinary, as we had different disciplines within medicine represented. But if we intend to create systems change, we must do better than check boxes on interdisciplinarity. The COVID-19 response required more of us. We called upon engineers and the manufacturing sector, within and outside of academia, to design and deliver ventilators and PPE. We called upon artists to leverage their talents and skills to facilitate health communication and reduce social isolation for quarantined populations.

 

When evidence synthesis is implemented by a truly (some may call it wildly) interdisciplinary team which includes robust and meaningful community engagement, evidence synthesis can also facilitate “meaning-making” at the nexus of a critical and complex public health issue. Through the lens of evidence synthesis — identifying, selecting, analyzing, and synthesizing the academic literature — groups can negotiate interpreting this literature, and co-created narratives emerge. We can better understand how evidence fits or fails within systems.

In the immediate aftermath of COVID, we have an opportunity to reframe and reimagine how evidence is used to inform decision making to improve health. I would be remiss if I did not properly contextualize the magnitude of the moment. The COVID-19 crisis is juxtaposed with another co-occurring crisis within the US healthcare system: systemic racism. Structural, systemic, cultural, and interpersonal racism persist in our country and have been identified as root causes of many health disparities.

Golden and Wendel point out that “…the entrenchment of conventional, biomedical approaches leads to limited innovation of new methods, and continued use of inadequate practices. These practices generate multiple obstacles to health equity, including continued individual-level foci, culturally inappropriate practices, deficits-based interventions, under-representation, and failures to generate systems-level change.” Foucault acknowledged the interdependency of power/knowledge which in turn generate theory and practice. We must remain cognizant and observant that the primary studies we design become the evidence we select and implement to inform practice and generate theory. We must remain vigilant, as the hierarchical model of EBM may be complicit in perpetuating systems of inequity.

EBM and evidence synthesis can contextualize uncertainty and support decision making in these areas, but we will only reduce uncertainty if there is deep consideration of what constitutes evidence, what that evidence represents and what — and importantly who — it leaves out. 

Concluding Remarks

Clinicians and public health practitioners will always struggle with uncertainty. The reference point has long been the decision, and making the right one. However, COVID-19 has in some ways forced us to accept that decisions are made in a complex system, where each element is inextricably connected to another and where actions and reactions reverberate in unexpected ways. As we seek evidence to guide decision making, perhaps we must reconsider what we ask of it, and critically consider what it does for us.
