Resource Pages

Monday, September 30, 2024

Antidiscrimination Law Meets AI—New Requirements for Clinicians, Insurers, & Health Care Organizations

Mello, M. M., & Roberts, J. L. (2024).
JAMA Health Forum, 5(8), e243397.

Responding to the threat that biased health care artificial intelligence (AI) tools pose to health equity, the US Department of Health and Human Services Office for Civil Rights (OCR) published a final rule in May 2024 holding AI users legally responsible for managing the risk of discrimination. This move raises questions about the rule’s fairness and potential effects on AI-enabled health care.

The New Regulatory Requirements

Section 1557 of the Affordable Care Act prohibits recipients of federal funding from discriminating in health programs and activities based on race, color, national origin, sex, age, or disability. Regulated entities include health care organizations, health insurers, and clinicians who participate in Medicare, Medicaid, or other federal programs. The OCR's rule sets forth these entities' obligations relating to the use of decision support tools in patient care, covering both AI-driven tools and simpler, analog aids such as flowcharts and guidelines.

The rule clarifies that Section 1557 applies to discrimination arising from use of AI tools and establishes new legal requirements. First, regulated entities must make “reasonable efforts” to determine whether their decision support tools use protected traits as input variables or factors. Second, for tools that do so, organizations “must make reasonable efforts to mitigate the risk of discrimination.”

Starting in May 2025, the OCR will address potential violations of the rule through complaint-driven investigations and compliance reviews. Individuals can also seek to enforce Section 1557 through private lawsuits. However, courts disagree about whether private actors can sue for disparate impact (practices that are neutral on their face but have discriminatory effects).

---------------------

Here are some thoughts:

Addressing Bias in Healthcare AI: New Regulatory Requirements and Implications

The US Department of Health and Human Services Office for Civil Rights (OCR) has issued a final rule holding healthcare providers liable for managing the risk of discrimination in AI tools used in patient care. This move aims to address the threat of biased healthcare AI tools to health equity.

New Regulatory Requirements

The OCR's rule clarifies that Section 1557 of the Affordable Care Act applies to discrimination arising from the use of AI tools. Regulated entities must make "reasonable efforts" to determine whether their decision support tools use protected traits as input variables or factors. If so, they must mitigate the risk of discrimination.

Fairness and Enforcement

The rule raises questions about fairness and its potential effects on AI-enabled healthcare. While the OCR's approach is flexible, that flexibility may create uncertainty for regulated entities about what counts as "reasonable efforts." The rule applies only to organizations using AI tools, not to developers, who are regulated under other federal rules. Enforcement will proceed through complaint-driven investigations and compliance reviews, with remedies including corrective action plans.

Implications and Concerns

The rule may create market pressure for developers to generate and provide information about bias in their products. However, concerns remain about the compliance burden on adopters, particularly small physician practices and low-resourced organizations. The OCR must provide further guidance and clarification to ensure meaningful compliance.

Facilitating Meaningful Compliance

Additional resources are necessary to make compliance feasible for all healthcare organizations; emerging bias-assessment tools and affordable technical assistance will be essential. The question of who will pay for AI assessments looms large: if assessment and monitoring costs are high and not reimbursed, the business case for adopting AI tools may evaporate.

Conclusion

The OCR's rule is an important step toward reducing discrimination in healthcare AI. Realizing this vision, however, requires resources that make meaningful compliance possible for all healthcare organizations. By addressing bias and promoting equity, regulators and providers can help ensure that AI tools benefit all patients, particularly vulnerable populations.