CVS Health, 27 Others Pledge Safe AI Use And Development
In this article for Law360, Rob Kantrowitz was quoted on upcoming legislative and regulatory developments concerning artificial intelligence in the healthcare industry.
CVS Health, Duke Health and 26 other healthcare providers and payors have pledged their commitment to the "safe, secure and trustworthy" use and purchase of artificial intelligence, according to a Thursday announcement by the Biden administration.
In the announcement, the White House said these voluntary commitments will help align industry action to ensure AI deployment in healthcare results in fair, appropriate, valid, effective and safe outcomes. These principles, known as FAVES, were established in a final rule released Wednesday by the U.S. Department of Health and Human Services to advance the exchange of health information across different systems, improve algorithm transparency and reduce burdens for users of health information technology.
"I think articulating the principles is very important and so is the commitment signed in this pledge, because everybody will say, 'We want to do trustworthy AI,'" Michael Pencina, Duke Health's chief data scientist, told Law360 on Thursday. "But the question is, What does it mean? What are the criteria or rubrics that you need to follow?"
In their pledge, the healthcare groups committed to transparency and safety measures, agreeing to adhere to a risk management model that addresses potential harms caused by AI applications and to inform users when content they receive is largely AI-generated and has not been reviewed by a human. The healthcare groups also agreed to support responsible AI innovation, including by developing AI solutions that optimize healthcare delivery, expand access to and the affordability of care, and reduce clinician burnout.
"The industry appears to be looking to get ahead of upcoming legislative and regulatory developments by adopting 'FAVES' principles that will likely set the parameters for upcoming federal action on AI," Robert Kantrowitz, a corporate healthcare partner at Kirkland & Ellis LLP, told Law360 in an emailed statement. "Stakeholders are likely getting the sense of where the legal landscape is heading in making efforts to adhere to these principles."
Pencina said a key component of the pledge is its inclusion of healthcare payors such as CVS Health and Premera Blue Cross.
"There have been instances where concern has been raised about payors using AI to deny care," Pencina said. "And I think that's the other side of it, which is really important — that we're all in it together."
"Everybody needs to adhere to the same principles," Pencina added.
According to Pencina, the pledge was the result of an October meeting between healthcare industry groups; government leaders from HHS' Office of the National Coordinator for Health Information Technology, or ONC; the U.S. Food and Drug Administration's Digital Health Center of Excellence; the U.S. Department of Veterans Affairs; and the Centers for Medicare and Medicaid Services. Later that month, the Biden administration issued an executive order directing HHS to examine the potential harms and benefits of using AI in healthcare.
Federal agencies have also moved the needle in recent months on the safe deployment of AI in healthcare. The ONC released a final rule Wednesday establishing new standards for the certification of health information technology, including criteria aligned with Biden's executive order and the FAVES principles. Additionally, the FDA in October announced it had authorized 171 medical devices that use AI and machine learning, bringing its count of authorized AI/ML-enabled medical devices to nearly 700.
Meanwhile, since July, 15 technology companies, including Google, Microsoft and IBM, have made a similar voluntary pledge to test the security of AI systems before release, report vulnerabilities in their systems and ensure users know when content is AI-generated.
"We must remain vigilant to realize the promise of AI for improving health outcomes. Healthcare is an essential service for all Americans, and quality care sometimes makes the difference between life and death," the White House said in Thursday's announcement. "Without appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best — and dangerous at worst."