OIG Reports that Clinical Trials Lack Diverse Subjects: What Role Can Artificial Intelligence Play?

As the largest public funder of biomedical research in the world, the National Institutes of Health (“NIH”) annually funds over $38 billion in extramural research, including about $6 billion for clinical trials. On May 28, 2024, the Department of Health and Human Services (“HHS”) Office of Inspector General (“OIG”) published a report (“OIG Report”) summarizing its finding that a majority of the NIH‑funded clinical trials randomly selected by OIG for review failed to meet NIH’s requirements for inclusion of diverse trial participants.

As reiterated in the OIG Report, the 1993 NIH Revitalization Act recognizes that diversity in clinical trial enrollment is crucial for closing health disparities and for producing research that is scientifically generalizable. Accordingly, NIH requires biomedical researchers seeking NIH funding for clinical trials to submit, as part of their grant applications, enrollment plans that justify enrollment targets broken down by sex, race, and ethnicity. A researcher may depart from these requirements only upon a clear and compelling rationale (which generally may not rest on cost considerations) as to why inclusion of certain groups (children, minority groups, and women, including pregnant women) is inappropriate. More recently, pursuant to the 21st Century Cures Act, NIH also monitors compliance with inclusion enrollment plans through periodic reports submitted by researchers to ClinicalTrials.gov, which must include valid analyses of outcomes by sex or gender, race, and ethnicity.

Nonetheless, as the OIG Report reflects, NIH has had limited success in improving enrollment of diverse subjects. One of OIG’s recommendations to NIH is to “develop additional ways of supporting researchers in meeting inclusion enrollment targets.” Per the OIG Report, “[c]ommon hurdles for diverse enrollment include increased costs (e.g., providing translated informed consent for patients with limited English proficiency or pregnancy tests for females), lack of outreach to diverse communities about local clinical trials, and implicit biases, among other barriers.”

Use and Regulation of AI in Clinical Trial Enrollment

Artificial intelligence (“AI”) is already being explored for its potential to improve clinical trial design and management and shows promise for patient recruitment, such as by optimizing site selection and creating more targeted engagement initiatives. For example, during a recent Food & Drug Administration (“FDA”) podcast, the Director of the Office of Medical Policy within the FDA’s Center for Drug Evaluation and Research noted that, “AI [is] being used to assist in recruitment, and is being really developed and used … to more effectively connect individuals as part of the trials. This can … involve mining vast amounts of data from diverse sources … including social media, medical literature, registries, and structured and unstructured data in electronic health records.” However, as numerous studies have shown, bias in AI—including underrepresentation of certain populations in the data sets used to train the AI—can further exacerbate inequities.
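
To make the underrepresentation problem concrete, the following minimal sketch (in Python, using the pandas library) compares the demographic composition of a hypothetical recruitment or training data set against reference population shares and flags shortfalls. Every column name, group label, and threshold below is an illustrative assumption, not a method drawn from the OIG Report, the FDA, or any cited tool.

    # Minimal sketch: flag demographic groups that are underrepresented in a
    # data set used to train or drive an AI recruitment tool, relative to a
    # reference population. All names and numbers are hypothetical.
    import pandas as pd

    # Hypothetical records mined from EHRs/registries for trial recruitment.
    records = pd.DataFrame({
        "patient_id": range(10),
        "race_ethnicity": ["White"] * 7 + ["Black"] * 2 + ["Hispanic"] * 1,
    })

    # Hypothetical reference shares (e.g., census or disease-burden data).
    reference_share = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19}

    observed_share = records["race_ethnicity"].value_counts(normalize=True)

    # Flag groups whose representation falls below 80% of the reference
    # share -- a purely illustrative screening threshold.
    for group, expected in reference_share.items():
        observed = observed_share.get(group, 0.0)
        if observed < 0.8 * expected:
            print(f"{group}: observed {observed:.0%} vs. reference "
                  f"{expected:.0%} -- underrepresented in this data set")

A model trained on data that fails a screen of this kind may systematically underperform for the underrepresented groups, which is precisely the inequity the studies noted above describe.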

In a development that could arguably help ameliorate problems of bias in clinical research, on May 6, 2024, the HHS Office for Civil Rights (“OCR”) published a final rule, “Nondiscrimination in Health Programs and Activities,” 89 Fed. Reg. 37522 (“Final Rule”). The Final Rule, codified at 45 C.F.R. § 92.210 and effective May 1, 2025, prohibits specified covered entities from discriminating on the basis of race, color, national origin, sex, age, or disability in certain “health programs or activities” through the use of “patient care decision support tools,” and expressly addresses certain clinical research activities.

Pursuant to the Final Rule, covered entities include (1) recipients of Federal financial assistance, including any grant, loan, credit, subsidy, contract (other than a procurement contract but including a contract of insurance), or any other arrangement by which the federal government, directly or indirectly, provides assistance or otherwise makes assistance available; (2) HHS; and (3) any health benefit exchange created under Title I of the Affordable Care Act (“ACA”). As described in more detail below, a “health program or activity” is defined to span clinical care, health insurance coverage, and health education activities, expressly includes clinical research, and quite broadly extends to “all of the operations of any entity principally engaged” in one of the defined health programs or activities.

The Final Rule also defines “patient care decision support tool” as “any automated or non‑automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision‑making in its health programs or activities,” such as clinical guidance produced from an algorithm or risk adjustment modeling. This definition is very broad and encompasses any tools in a provider’s or payer’s health programs and activities that assist with screening, predicting risk, diagnosing, treating, planning, allocating resources, determining eligibility, conducting utilization review, or performing health care operations that “affect the patient care that individuals receive.”

Potential Implications of the Final Rule for Clinical Research

Importantly, the Final Rule explicitly includes clinical research within the definition of “health program or activity.” This indicates that, even where a drug study is experimental in nature and offers no direct medical or curative benefit to its research subjects, the Final Rule’s anti‑bias standards would appear to apply if the treatment of research subjects under the study involves patient care decision support tools. Commentary accompanying the Final Rule indicates that the potential for clinical research to impact “the most promising advances in patient care,” together with the involvement of human subjects (unlike in laboratory research), creates a sufficient nexus to patient care for clinical research to be included. Notably, the Final Rule is not intended to alter the requirement that NIH‑funded clinical trials enroll diverse subjects unless doing so would be inappropriate.

With respect to the obligations imposed on covered entities using patient care decision support tools in their health programs or activities, the Final Rule takes an approach consistent with the National Institute of Standards and Technology’s AI Risk Management Framework and imposes on covered entities:

  • An ongoing duty to “make reasonable efforts to identify uses of patient care decision support tools in its health programs or activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability,” and
  • A duty to “mitigate the risk of discrimination resulting from the tool’s use in its health programs or activities.”

This means that hospitals, health systems, provider entities, and payers now have an affirmative obligation to identify bias in any patient care decision support tools and to take reasonable steps to mitigate discrimination once the covered entity becomes aware, or should be aware, of the potential for discrimination resulting from the use of such tools.
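
The Final Rule does not prescribe any particular methodology, but as a purely illustrative sketch of what such an identification‑and‑mitigation process might involve, the Python example below inventories a hypothetical tool’s input variables against the protected characteristics listed in the rule and then screens the tool’s recommendation rates across demographic groups. Every feature name, data value, and threshold is an assumption for illustration only, not a compliance standard.

    # Illustrative sketch only: one way a covered entity might (1) identify a
    # patient care decision support tool that uses protected characteristics
    # as inputs and (2) screen its outputs for disparities across groups.
    # Feature names, data, and the 0.8 disparity threshold are hypothetical.
    import pandas as pd

    PROTECTED = {"race", "color", "national_origin", "sex", "age", "disability"}

    # Step 1: inventory the tool's input variables for protected characteristics.
    tool_features = ["age", "creatinine", "race", "systolic_bp"]  # hypothetical
    flagged = PROTECTED.intersection(tool_features)
    if flagged:
        print(f"Tool uses protected input variables: {sorted(flagged)}")

    # Step 2: screen the tool's outputs for disparities in who is recommended.
    outputs = pd.DataFrame({
        "sex": ["F", "F", "F", "M", "M", "M", "M", "M"],
        "recommended": [0, 0, 1, 1, 1, 1, 0, 1],  # hypothetical tool decisions
    })
    rates = outputs.groupby("sex")["recommended"].mean()
    if rates.min() < 0.8 * rates.max():  # illustrative disparity screen
        print(f"Recommendation rates differ across groups:\n{rates}")
        print("-> investigate and mitigate per 45 C.F.R. 92.210")

In practice, a covered entity’s evaluation process would also draw on the qualitative sources OCR identifies below, such as developer disclosures, the medical literature, and complaints received about a tool.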

In assessing the sufficiency of a covered entity’s efforts, OCR may consider, among other factors:

  • The size and resources of the covered entity;
  • Whether the covered entity used the tool as intended by the developer and, as applicable, as approved by regulators;
  • Whether the developer informed the covered entity about relevant risks of bias; and
  • “[W]hether the covered entity has a methodology or process in place for evaluating the patient care decision support tools it adopts or uses, which may include seeking information from the developer, reviewing relevant medical journals and literature, obtaining information from membership in relevant medical associations, or analyzing comments or complaints received about patient care decision support tools.”

In addition, under the definition of “health program or activity,” where a covered entity is principally engaged in clinical research or another health care program or activity enumerated in the Final Rule, all of the entity’s uses of patient care decision support tools, across all of its operations, must comply with the Final Rule.

Notably, although the Final Rule would certainly capture academic medical centers and other recipients of NIH clinical trial funding, it would not necessarily capture clinical trials funded solely by industry sponsors and conducted, for example, by clinical research organizations outside of an academic setting, unless those organizations otherwise receive federal funding. Nonetheless, the Final Rule has the potential to shape the entire field of clinical research indirectly, as developers of AI for use in clinical trials may wish to create a product that is marketable to academic medical centers and other kinds of covered entities, regardless of funding source. Moreover, on May 29, 2024, the FDA announced that, in accordance with President Biden’s Executive Order 14110, it is planning to release AI guidance later this year; that guidance would extend to all phases of drug development regulated by the FDA and can be expected to align with the Final Rule.

OCR is requesting additional comments, including on whether to expand the scope of the Final Rule to cover other decision support tools used in covered entities’ health programs and activities that do not directly impact patient care and clinical decision‑making but may nonetheless violate Section 1557 of the ACA, such as automated billing and coding decision support tools.