
More Efficient and Effective Clinical Decision-Making

Published on Jan 29, 2021

Sabina Leonelli’s (2020) article “Data Science in Times of Pan(dem)ic” (this issue) makes a compelling case for the need to be strategic about the time and resources that are devoted to data science approaches to dealing with COVID-19. I appreciate her focus on the five “imaginaries” of population surveillance, predictive modeling, causal explanation, evaluation of logistical decisions, and identification of social and environmental need. In this comment I will propose a sixth imaginary that data science must address in order to successfully respond to COVID-19 or future public health emergencies: clinical decision making. First, I will describe two failings that have been highlighted by the pandemic: (1) randomized clinical trials (RCTs) have been too slow, and in some cases too sloppy, to reliably inform clinical practice, and (2) we lack a principled framework for incorporating evidence beyond RCTs into clinical practice. I will describe the state of affairs in the United States, which has had a singularly dysfunctional response to the pandemic, but many of the failings of the United States also apply to the European, World Health Organization (WHO), and global responses to COVID-19. Then I will briefly discuss some ways in which data science can address these failings going forward.

Clinical decision making, especially about pharmacotherapies, typically relies on highly regulated, time-consuming, and siloed clinical trials. This system works reasonably well for routine research and drug licensing, but has failed miserably in the face of a global public health emergency like COVID-19, where it has resulted in the worldwide proliferation of thousands of often redundant, rarely coordinated, frequently underpowered, and sometimes poorly executed trials. Early in the pandemic many authors argued for a globally coordinated approach to the quest for treatments for COVID-19 (Bassi & Hwenda, 2020; Dean et al., 2020; Gates, 2020; London & Kimmelman, 2020; Ogburn et al., 2020), emphasizing the need for data-sharing, collaboration, and the use of core protocols in order to develop reliable evidence about benefits and harms of potential treatments for COVID-19 through RCTs. But it is difficult to overstate the extent to which we have failed. While some RCTs for investigational treatments were centrally organized (the WHO Solidarity trial and the U.K. Recovery trial are two of the best examples), 2,500 COVID-19 interventional trials are registered on clinicaltrials.gov and the vast majority have no central organization. According to an unpublished but widely discussed Food and Drug Administration (FDA)-commissioned study, 90% of COVID-19 trials registered to clinicaltrials.gov are “inactionable,” meaning they are of insufficient quality to provide actionable evidence. News reports (e.g., Brodwin & Robbins, 2020; Johnson, 2020) described a research community in disarray, hampered by a vacuum of national leadership.

Arguably, it is the absence of decisive evidence earlier in the pandemic that left the door open for the politicization of hydroxychloroquine (HCQ) and other treatments that have been widely used despite the absence of gold-standard evidence of effectiveness, and sometimes even despite evidence of safety risks (Florko, 2020; Rowland, 2020). In the case of convalescent plasma, researchers and regulators still have not reached a consensus about whether or not it is an effective treatment. The fact that months have passed and hundreds of thousands of patients have been treated with convalescent plasma in the face of mixed evidence is as important an indicator of the failure of our clinical research infrastructure as the more obvious failures around HCQ. With adequate infrastructure and resources it would have been possible to definitively assess the efficacy of convalescent plasma in a matter of weeks.
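To give a sense of scale for the claim that weeks could have sufficed, here is a minimal back-of-the-envelope power calculation in Python. The mortality rates, effect size, and enrollment rate are hypothetical placeholders, not figures from any actual convalescent plasma study; the point is only that the required enrollment is modest relative to the number of hospitalized patients a coordinated network could reach.

```python
# Back-of-the-envelope sample size for a two-arm mortality endpoint.
# Every number below is an illustrative assumption, not data from any
# actual convalescent plasma trial.
from scipy.stats import norm

p_control = 0.25        # assumed 28-day mortality under usual care
p_treated = 0.20        # assumed mortality if plasma cut risk by ~20% (relative)
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
p_bar = (p_control + p_treated) / 2

# Standard two-proportion sample-size formula, per arm
n_per_arm = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_b * (p_control * (1 - p_control)
                       + p_treated * (1 - p_treated)) ** 0.5) ** 2
             / (p_control - p_treated) ** 2)

print(f"patients per arm: {n_per_arm:.0f}")      # roughly 1,100
print(f"total enrollment: {2 * n_per_arm:.0f}")  # roughly 2,200

# At a hypothetical coordinated enrollment rate of 300 patients/day,
# enrollment alone would take about a week, plus outcome follow-up.
print(f"days to enroll at 300/day: {2 * n_per_arm / 300:.0f}")
```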

In the absence of efficient, reliable RCT evidence, decision makers have sometimes resorted to other kinds of evidence to inform clinical practice, such as hypothesized mechanisms and causal analyses of observational data. This has led to ad hoc and opaque decision-making processes that are open to political and media influence. Even when such influence is applied for good, it is a troubling precedent for clinical decisions to be made in this way.

As a case study, consider the drug fluvoxamine. Evidence that fluvoxamine may be an effective treatment for COVID has steadily accumulated over the course of the pandemic. But, in the absence of a definitive RCT, individual clinicians are left to subjectively weigh the existing sources of evidence (if they are even aware of the potential role for fluvoxamine in treating COVID). Fluvoxamine is a selective serotonin reuptake inhibitor (SSRI) used to treat obsessive-compulsive disorder and depression. In 2019 researchers studying a different condition discovered that it can inhibit cytokine storms (Rosen et al., 2019), which are thought to cause severe and fatal COVID outcomes. Among SSRIs, many of which act on the sigma-1 receptor, it is the strongest agonist. In April 2020, researchers used proteomic analyses and in vitro experiments to show that the sigma-1 receptor plays an important functional role in COVID-19 infectivity (Gordon, Jang et al., 2020). In August 2020, an observational analysis of hospitalized patients in France found that being on SSRIs significantly reduced the risk of death (Hoertel et al., 2020). In November 2020, a small exploratory RCT found a dramatic protective effect of fluvoxamine for outpatient COVID patients (Lenze et al., 2020), genetic analyses corroborated the importance of the sigma-1 receptor in COVID-19 infectivity (Gordon, Hiatt et al., 2020), and another observational study used a clever natural experiment to find a pronounced protective effect of SSRIs among hospitalized COVID patients (Gordon, Hiatt et al., 2020). At this point fluvoxamine was picked up in news stories (Schmidt, 2020) and a small number of clinicians began offering fluvoxamine to COVID-positive patients (Patterson & Berggren, 2020). In December 2020, researchers began enrolling into a large outpatient RCT (https://stopcovidtrial.wustl.edu/), the results of which will likely be definitive and could potentially precipitate a powerful shift in the treatment and prognosis of COVID patients. But it will likely take months to complete.

Fluvoxamine is safe, at least in the absence of contraindications; unlike HCQ there’s no risk of a shortage affecting people who rely on the drug for other conditions; and preliminary results are compelling. Some COVID activists with whom I’ve spoken believe that fluvoxamine should be prescribed to everyone who tests positive for COVID; the fact that it is not, even as thousands of Americans die of COVID every day, is a monumental ethical and health care failing. For some clinicians, the safety and the preliminary evidence point toward offering fluvoxamine as an option for patients (Patterson & Berggren, 2020). But for many clinicians, clinical researchers, and bioethicists, to lower our standards of evidence in the face of a raging pandemic is dangerous: for the precedents it sets, for population health, and for individuals taking other, similarly promising, but unproven medications with unknown interactions (London & Kimmelman, 2020; Seymour et al., 2020; Wilson, 2020). When evidence comes from the usual sources, that is, from large, well-run RCTs, there is a general consensus that statistical significance should guide clinical practice. But when evidence comes from these novel, disparate sources, there is no consensus about how to translate evidence into practice.

What can be done?

On the RCT front, funding agencies, institutional prioritization committees, and institutional review boards (IRBs) must be more discerning. They must not fund or approve the poorly designed or underpowered studies that were deemed “inactionable” by the FDA study. They must consider the broader research landscape and decline to fund or approve studies of treatments already being adequately studied by other institutions and researchers. This means that they must have access to information about the broader research landscape; the reporting requirements of clinicaltrials.gov are far from sufficient to facilitate these kinds of considerations.

Journals, funders, and promotion committees must realign incentives to promote good science over personal glory. A researcher’s career should not suffer because they choose to participate in a well-run multisite trial over being the principal investigator of their own small trial, or because they contribute data to a reliable meta-analysis in lieu of publishing an underpowered analysis in a lead-author paper.

A federally funded investigator should be required to share their data in the service of public health—not after their flagship paper goes to press, but immediately. With thousands of people dying each day, the timing matters. When data are shared, we need systems in place to facilitate the aggregation of evidence across multiple RCTs answering similar clinical questions (Ogburn et al., 2020).
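As a minimal sketch of what such aggregation might look like at the summary level, the snippet below pools log odds ratios from several hypothetical trials with a simple inverse-variance (fixed-effect) meta-analysis. The trial counts are made up; a real pipeline would need harmonized outcome definitions, random-effects or individual-patient-data models, and living updates as new trials report.

```python
# Inverse-variance (fixed-effect) pooling of log odds ratios across
# several hypothetical RCTs of the same treatment and outcome.
import numpy as np
from scipy.stats import norm

# (events_treated, n_treated, events_control, n_control) -- made-up numbers
trials = [(12, 80, 20, 80),
          (30, 200, 45, 200),
          (5, 40, 9, 40)]

log_ors, variances = [], []
for a, n1, c, n0 in trials:
    b, d = n1 - a, n0 - c                             # non-events per arm
    log_ors.append(np.log((a * d) / (b * c)))         # log odds ratio
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf variance

w = 1 / np.array(variances)                           # inverse-variance weights
pooled = np.sum(w * np.array(log_ors)) / np.sum(w)
se = (1 / np.sum(w)) ** 0.5
lo, hi = pooled + norm.ppf([0.025, 0.975]) * se

print(f"pooled OR {np.exp(pooled):.2f}, 95% CI ({np.exp(lo):.2f}, {np.exp(hi):.2f})")
```

Even this crude level of pooling presupposes the data sharing and outcome harmonization described above, which is precisely why the infrastructure has to exist before the emergency.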

Not all of these solutions directly involve data science, but as stewards of quantitative evidence it is data scientists’ prerogative to advocate for all solutions. And underlying these solutions is the need for robust and flexible notions of quantitative evidence. Data scientists must play a role in empowering IRBs, prioritization committees, and funders to assess whether a proposed study fills an evidentiary gap. We should be at the table when a study is deemed actionable or inactionable. We are the ones who will have to build the pipelines to aggregate data from disparate studies and provide concise summaries of the evidence produced.

This is not the first time researchers and bioethicists have clamored for this kind of reform (National Academies of Sciences, Engineering, and Medicine et al., 2017). While I hope that it will be impossible to ignore the failures of our research infrastructure in the wake of COVID-19, I know that the current system is deeply entrenched and will be hard to unseat. This leaves the challenge of decision making in the absence of efficient and reliable RCT evidence. For this, we need data scientists to work with bioethicists, regulators, epistemologists, clinicians, and researchers to develop a framework for balancing, weighting, quantifying, and aggregating evidence across different domains such as animal models, biological and in vitro experiments, observational analyses, and exploratory RCTs, as in the fluvoxamine case study. Such a framework will facilitate informed decisions about when—or whether—a body of evidence can suffice to inform clinical practice in the absence of a definitive RCT.
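One direction such a framework could take, sketched below with entirely made-up numbers, is Bayesian: summarize each non-RCT evidence source on a common effect scale, deliberately discount it, and combine the discounted sources into a prior that a definitive RCT would later update. The discount factors are exactly the quantities that bioethicists, clinicians, and data scientists would need to debate and standardize; nothing here is a validated decision rule.

```python
# Illustrative aggregation of heterogeneous evidence on a log odds ratio
# scale. Each non-definitive source is discounted (power-prior style) before
# precision-weighted combination. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

def combine_normal(means, variances):
    """Precision-weighted combination of independent normal summaries."""
    w = 1 / np.asarray(variances, dtype=float)
    mean = np.sum(w * np.asarray(means, dtype=float)) / np.sum(w)
    return mean, 1 / np.sum(w)

# Hypothetical (log OR estimate, variance) summaries from each domain:
evidence = {
    "observational": (-0.45, 0.04),   # e.g., adjusted EHR analysis
    "mechanistic":   (-0.30, 0.25),   # weak, elicited from biological/in vitro work
    "exploratory":   (-0.60, 0.15),   # small exploratory RCT
}

# Discount factor per source: 0 = ignore entirely, 1 = take at face value.
# Choosing these weights is the judgment the framework would have to formalize.
discount = {"observational": 0.3, "mechanistic": 0.2, "exploratory": 0.8}

means = [est for est, _ in evidence.values()]
variances = [var / discount[name] for name, (_, var) in evidence.items()]

prior_mean, prior_var = combine_normal(means, variances)
pr_benefit = norm.cdf(0, loc=prior_mean, scale=prior_var ** 0.5)

print(f"discounted-evidence prior on log OR: {prior_mean:.2f} (var {prior_var:.2f})")
print(f"Pr(log OR < 0 | discounted evidence): {pr_benefit:.2f}")
```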

Unlike other public health threats, the timescale on which pandemics wreak havoc is such that short delays in deciding to deploy treatments can have massive consequences. This may argue for a more overtly decision theory–based approach to clinical decision making than the traditional quest for statistical significance in an RCT. Some recent papers argue for a decision-theoretic approach to RCTs (Manski & Tetenov, 2016) with specific application to the small, underpowered COVID-19 RCTs that would be inactionable in a traditional statistical significance framework (Manski & Tetenov, 2020). Though it would be mathematically challenging, in principle a similar decision-theoretic framework could be used to weight and aggregate multiple RCTs or even disparate sources of evidence. I am wary of adopting this or other novel approaches to clinical decision making without considered discussions among bioethicists, data scientists, regulators, and other stakeholders, but these are discussions that we must have before the next global pandemic arrives.
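For a feel of what a decision-theoretic evaluation looks like in the simplest case, the simulation below estimates the maximum regret of the "empirical success" rule (adopt whichever arm had the better observed outcome) as a function of trial size, in the spirit of Manski and Tetenov's analysis. It is a rough illustration with an arbitrary grid of true success rates, not a reproduction of their results.

```python
# Simulated maximum regret of the empirical-success rule for a two-arm
# trial with a binary outcome. Rough illustration only; see Manski &
# Tetenov (2016, 2020) for the formal analysis.
import numpy as np

rng = np.random.default_rng(1)

def max_regret(n_per_arm, n_sim=20_000):
    """Worst-case expected welfare loss of 'adopt the arm with more successes'."""
    worst = 0.0
    grid = np.linspace(0.05, 0.95, 19)        # candidate true success rates
    for p_new in grid:
        for p_std in grid:
            if p_new == p_std:
                continue
            new = rng.binomial(n_per_arm, p_new, n_sim)
            std = rng.binomial(n_per_arm, p_std, n_sim)
            # Adopt the new treatment when it has strictly more successes
            # (ties default to standard care -- a simplification).
            achieved = np.where(new > std, p_new, p_std)
            regret = np.mean(max(p_new, p_std) - achieved)
            worst = max(worst, float(regret))
    return worst

for n in (25, 100, 400):
    print(f"n per arm = {n:4d}: estimated max regret = {max_regret(n):.3f}")
```

In such a framework, the relevant question is not whether an estimate crosses a significance threshold but whether the worst-case cost of acting on the available trial data is acceptably small relative to the cost of waiting.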


Disclosure Statement

Elizabeth L. Ogburn has no financial or non-financial disclosures to share for this article.


References

Bassi, L. L., & Hwenda, L. (2020). COVID-19: Time to plan for prompt universal access to diagnostics and treatments. The Lancet Global Health, 8(6), e756–e757. https://doi.org/10.1016/S2214-109X(20)30137-6

Brodwin, E., & Robbins, R. (2020, April 23). It’s noisy: Competing Covid-19 efforts could hamper progress, experts warn. Stat News. https://www.statnews.com/2020/04/23/competing-coronavirus-efforts-gilead-google-apple/

Dean, N. E., Gsell, P.-S., Brookmeyer, R., Crawford, F. W., Donnelly, C. A., Ellenberg, S. S., Fleming, T. R., Halloran, M. E., Horby, P., Jaki, T., Krause, P. R., Longini, I. M., Mulangu, S., Muyembe-Tamfum, J.-J., Nason, M. C., Smith, P. G., Wang, R., Henao-Restrepo, A. M., & De Gruttola, V. (2020). Creating a framework for conducting randomized clinical trials during disease outbreaks. New England Journal of Medicine, 382(14), 1366–1369. https://doi.org/10.1056/NEJMsb1905390

Florko, N. (2020, April 24). Why was an obscure federal bureaucrat involved in Trump’s emergency hydroxychloroquine authorization? Stat News. https://www.statnews.com/2020/04/24/why-rick-bright-involved-hydroxychloroquine/

Gates, B. (2020). Responding to Covid-19—A once-in-a-century pandemic? New England Journal of Medicine, 382(18), 1677–1679. https://doi.org/10.1056/NEJMp2003762

Gordon, D. E., Hiatt, J., Bouhaddou, M., Rezelj, V. V., Ulferts, S., Braberg, H., Jureka, A. S., Obernier, K., Guo, J. Z., Batra, J., Kaake, R. M., Weckstein, A. R., Owens, T. W., Gupta, M., Pourmal, S., Titus, E. W., Cakir, M., Soucheray, M., McGregor, M., . . . Krogan, N. J. (2020). Comparative host-coronavirus protein interaction networks reveal pan-viral disease mechanisms. Science, 370(6521), Article eabe9403. https://doi.org/10.1126/science.abe9403

Gordon, D. E., Jang, G. M., Bouhaddou, M., Xu, J., Obernier, K., White, K. M., O’Meara, M. J., Rezelj, V. V., Guo, J. Z., Swaney, D. L., Tummino, T. A., Hüttenhain, R., Kaake, R. M., Richards, A. L., Tutuncuoglu, B., Foussard, H., Batra, J., Haas, K., Modak, M., . . . Krogan, N. J. (2020). A SARS-CoV-2 protein interaction map reveals targets for drug repurposing. Nature, 583(7816), 459–468. https://doi.org/10.1038/s41586-020-2286-9

Hoertel, N., Rico, M. S., Vernet, R., Beeker, N., Jannot, A.-S., Neuraz, A., Salamanca, E., Paris, N., Daniel, C., Gramfort, A., Lemaitre, G., Bernaux, M., Bellamine, A., Lemogne, C., Airagnes, G., Burgun, A., & Limosin, F. (2020). Association between SSRI antidepressant use and reduced risk of intubation or death in hospitalized patients with coronavirus disease 2019: A multicenter retrospective observational study. medRxiv. https://doi.org/10.1101/2020.07.09.20143339

Johnson, C. (2020, April 15). Chaotic search for coronavirus treatments undermines efforts, experts say. The Washington Post. https://www.washingtonpost.com/health/2020/04/15/coronavirus-treatment-cure-research-problems/

Lenze, E. J., Mattar, C., Zorumski, C. F., Stevens, A., Schweiger, J., Nicol, G. E., Miller, J. P., Yang, L., Yingling, M., Avidan, M. S., & Reiersen, A. M. (2020). Fluvoxamine vs placebo and clinical deterioration in outpatients with symptomatic COVID-19: A randomized clinical trial. JAMA, 324(22), 2292–2300. https://doi.org/10.1001/jama.2020.22760

London, A. J., & Kimmelman, J. (2020). Against pandemic research exceptionalism. Science, 368(6490), 476–477. https://doi.org/10.1126/science.abc1731

Manski, C. F., & Tetenov, A. (2016). Sufficient trial size to inform clinical practice. Proceedings of the National Academy of Sciences, 113(38), 10518–10523. https://doi.org/10.1073/pnas.1612174113

Manski, C. F., & Tetenov, A. (2020). Statistical decision properties of imprecise trials assessing COVID-19 drugs (NBER Working Paper No. 27293). National Bureau of Economic Research. https://doi.org/10.3386/w27293

National Academies of Sciences, Engineering, and Medicine et al. (2017). Integrating clinical research into epidemic response: The Ebola experience. National Academies Press. https://doi.org/10.17226/24739

Ogburn, E. L., Bierer, B. E., Brookmeyer, R., Choirat, C., Dean, N. E., De Gruttola, V., Ellenberg, S. S., Halloran, M. E., Hanley, D. F., Jr., Lee, J. K., Wang, R., & Scharfstein, D. O. (2020). Aggregating data from COVID-19 trials. Science, 368(6496), 1198–1199. https://doi.org/10.1126/science.abc8993

Patterson, J. E., & Berggren, R. E. (2020, December 23). Outpatient strategies for COVID-19 therapy: Clinical equipoise in defense of Pascal. Medscape. https://www.medscape.com/viewarticle/942949#vp_3

Rosen, D. A., Seki, S. M., Fernández-Castañeda, A., Beiter, R. M., Eccles, J. D., Woodfolk, J. A., & Gaultier, A. (2019). Modulation of the sigma-1 receptor–IRE1 pathway is beneficial in preclinical models of inflammation and sepsis. Science Translational Medicine, 11(478), Article eaau5266. https://doi.org/10.1126/scitranslmed.aau5266

Rowland, C. (2020, March 30). FDA authorizes widespread use of unproven drugs to treat coronavirus, saying possible benefit outweighs risk. The Washington Post. https://www.washingtonpost.com/business/2020/03/30/coronavirus-drugs-hydroxychloroquin-chloroquine/

Schmidt, C. (2020, December 14). These drugs might prevent severe COVID. Scientific American. https://www.scientificamerican.com/article/these-drugs-might-prevent-severe-covid1/

Seymour, C. W., Bauchner, H., & Golub, R. M. (2020). COVID-19 infection—Preventing clinical deterioration. JAMA, 324(22), Article 2300. https://doi.org/10.1001/jama.2020.21720

Wilson, F. P. (2020, November 17). Why not try melatonin, zinc, vit C? COVID-19 and Pascal’s wager. Medscape. https://www.medscape.com/viewarticle/941041


©2021 Elizabeth L. Ogburn. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
