Everyone’s a Critic: FDA Under Fire for High Drug Approval Numbers

Lately, FDA has been subject to criticism on almost every front. A recent NY Times Op-Ed alleging political interference, the popular theory that FDA fueled the opioid crisis, and the quality and inspection concerns raised in the 2019 book Bottle of Lies are all emblematic of the recent and widespread scrutiny of the agency. Through all of this criticism, FDA continues on its mission of “advancing the public health by helping to speed innovations that make medical products more effective, safer, and more affordable.” To that end, FDA approved 48 “novel drugs” in 2019—drugs never before approved or marketed in the United States. But, as discussed in a recent Wall Street Journal article, a July 2019 study questions whether FDA should be touting all of these new approvals. This is because, according to the study, FDA has routinely rushed approvals in an effort to meet an unofficial year-end deadline—and in doing so, has compromised patient safety.

The study, undertaken by professors from Harvard Business School, the University of Texas at Dallas, and the MIT Sloan School of Management, examines the global pattern of drug approvals in the last weeks of each year. In the U.S., they found that there is a rush of drug approvals in December and the week before Thanksgiving. And while this itself is not a problem, the concern arises from the significantly increased number of adverse events seen with these last-minute approvals. According to the study abstract:

Drugs approved in December and at month-ends are associated with significantly more adverse effects, including more hospitalizations, life-threatening incidents, and deaths. This pattern is consistent with a model in which regulators rush to meet internal production benchmarks associated with salient calendar periods: this type of “desk-clearing” behavior results in more lax review, which leads both to increased output and increased safety issues.

The study notes that the “December drugs” phenomenon has been reported before, but the finding that these “December drugs” are associated with more adverse events is novel to this paper.

The study evaluated FDA’s approval of new drug applications (both NDAs and BLAs) from 1980 to 2016 on a weekly and monthly basis with data derived from the Drugs@FDA database. The authors found that there are roughly 80% more approvals in December than in any other month. Further, more approvals occur at the end of the month than at any other point in a given month. Additionally, for each approved NDA, the authors identified and collected information on measures of post-market safety: reported adverse events, black-box warnings, market withdrawals, and MedWatch reports. They then compared the number of these safety issues arising from drugs approved in December to those arising from drugs approved throughout the rest of the year.

The study results showed that “drugs approved in December have higher adverse events.” The authors found that 89% of their sample observations were associated with at least one reported adverse event. They also found that 24% of their sample observations were included in MedWatch, 35% were associated with a black-box warning, and 3.4% were withdrawn after reaching the market. December drugs are 19% more likely to be included in MedWatch and 5.7% more likely to receive a black-box warning.

These results, in and of themselves, are not particularly concerning, and it is not entirely clear how reflective they are of actual safety problems. Because the term “adverse events” is nebulous, and because it typically encompasses broad categories of events that are correlated with, rather than definitively caused by, a drug, it is not alarming that 89% of the sample was associated with adverse events. Further, as discussed below, the use of a black-box warning as a safety signal is misplaced, as FDA has already assessed that safety concern. The authors’ methodology invites further scrutiny: they manually determined whether reported adverse events were associated with a specific drug when multiple drugs were listed, and whether those events were life-threatening or led to hospitalization, disability, or death. They also obtained information “on safety-related drug withdrawals following FDA approval” but provided little detail about this process. Relying on market withdrawals to signify a safety risk is presumptuous: drugs can be withdrawn from the market for all sorts of reasons, so unless FDA has formally determined that a drug was withdrawn for reasons of safety or efficacy, this metric could skew the results. And because FDA typically makes such a determination only in response to a request to do so, a whole host of products have been withdrawn or discontinued for reasons that have not yet been fully assessed. The use of this parameter, therefore, raises additional questions.

Nonetheless, after taking efforts to control for drug popularity, complexity, and other circumstances, the study authors concluded that FDA may be biased toward approval for these “December drugs.” The authors hypothesize that “informal performance benchmarks” focused on the quantity rather than the quality of drug approvals may bias FDA regulators toward approval. Noting that the number of drugs approved is immediately visible, that industry and patient groups advocate for approval rather than rejection, and that adverse events may take years to materialize, the authors posit that internal pressure to approve drugs is high.

As the study authors themselves recognize, one major issue in the study is the inability to assess the benefits of the approved products in comparison to the risks. But this assessment is a critical element of drug approval. This is precisely the reason that the existence of a boxed warning should not be “counted” as a signal of safety issues with approved products. Boxed warnings are the result of an assessment of the risk of the serious adverse event as compared to the benefit; indeed, in approving a product with a boxed warning, FDA is not only aware of the risks but has made an affirmative decision that the benefits of this potentially dangerous product outweigh the risks. As such, using the existence of a boxed warning to suggest that FDA rushed approvals to meet an informal agency deadline inherently ignores the careful calculation that FDA undertakes when approving a product with a boxed warning.

In fact, the study’s omission of a benefit analysis ignores the real-life context of a product’s approval. While it is true that patient advocacy groups may push FDA to approve a product that is associated with significant adverse events or risks, that push typically signals that the patient benefit may outweigh those risks. FDA’s approval of such a product is therefore not necessarily an effort to meet some metric or a response to internal pressure; the agency may approve a product notwithstanding its risks or safety signals because there is great patient need or benefit. So while more products may be approved in December, and those products may be associated with more adverse events, the context in which these “less safe” products will be used is central to why and how they were approved. It is too easy to say that adverse events show the products are not safe enough if you discount the benefits that such products provide.

Further, the authors’ desk-clearing hypothesis seems to ignore the bureaucratic approval process. They suggest delaying December approvals for reevaluation in January on the theory that December approvals are too rushed to support safe approval decisions, but this ignores the fact that the safety determinations required for approval may not even have occurred in December. It is not as though one FDA employee looks at an application for the first time in December and decides to send off an approval at Christmas break; several layers of sign-off are necessary for each drug product.

This paper seems to surmise that FDA is focused on raising its approval statistics, and that the default stance of regulators has shifted to approval rather than rejection now that about 60% of all NDAs are approved. But the publicly available numbers alone do not give the whole picture: the authors never discuss how many of that 60% of NDAs are approved in their second, third, or fourth cycle of review. Multiple review cycles would suggest that the default is not approval but rather working with an NDA sponsor to ultimately get to approval. And where the mission is to improve the public health and approve safe products, wouldn’t this approach make sense? Though the study makes some interesting points, its implied criticism, in which the authors seemingly link safety risks to rushed approvals, underestimates the rigorous risk/benefit analysis and bureaucratic process that each approval undergoes. No one denies that more drugs may be approved in December or at the end of a given month, but it is a leap to presume that additional adverse events for these products indicate a “rush” to approval that compromised safety, at least without a cursory review of the benefit-to-risk analysis.