It’s the rare policy question that unites Republican Gov. Ron DeSantis of Florida and the Democratic-led Maryland government against President Donald Trump and Gov. Gavin Newsom of California: How should health insurers use AI?
Regulating artificial intelligence, especially its use by health insurers, is becoming a politically divisive topic, and it’s scrambling traditional partisan lines.
Boosters, led by Trump, are not only pushing its integration into government, as in Medicare’s experiment using AI in prior authorization, but also trying to stop others from building curbs and guardrails. A December executive order seeks to preempt most state efforts to govern AI, describing “a race with adversaries for supremacy” in a new “technological revolution.”
“To win, United States AI companies must be free to innovate without cumbersome regulation,” Trump’s order said. “But excessive State regulation thwarts this imperative.”
Across the nation, states are in revolt. At least four — Arizona, Maryland, Nebraska, and Texas — enacted legislation last year reining in the use of AI in health insurance. Two others, Illinois and California, enacted bills the year before.
Legislators in Rhode Island plan to try again this year after a bill requiring regulators to collect data on technology use failed to clear both chambers last year. A bill in North Carolina requiring insurers not to use AI as the sole basis of a coverage decision attracted significant interest from Republican legislators last year.
DeSantis, a former GOP presidential candidate, has rolled out an “AI Bill of Rights,” whose provisions include restrictions on the technology’s use in processing insurance claims and a provision allowing a state regulatory body to inspect algorithms.
“We have a responsibility to ensure that new technologies develop in ways that are moral and ethical, in ways that reinforce our American values, not in ways that erode them,” DeSantis said during his State of the State address in January.
Ripe for Regulation
Polling shows Americans are skeptical of AI. A December poll from Fox News found that 63% of voters described themselves as “very” or “extremely” concerned about artificial intelligence, with majorities across the political spectrum: nearly two-thirds of Democrats and just over 3 in 5 Republicans said they had qualms about AI.
Health insurers’ tactics to hold down costs also trouble the public; a January poll from KFF found widespread discontent over issues like prior authorization. (KFF is a health information nonprofit that includes KFF Health News.) Reporting from ProPublica and other news outlets in recent years has highlighted the use of algorithms to rapidly deny insurance claims or prior authorization requests, apparently with little review by a doctor.
Last month, the House Ways and Means Committee hauled in executives from Cigna, UnitedHealth Group, and other major health insurers to address concerns about affordability. When pressed, the executives either denied or avoided talking about using the most advanced technology to reject authorization requests or toss out claims.
AI is “never used for a denial,” Cigna CEO David Cordani told lawmakers. Like others in the health insurance industry, the company is being sued for its methods of denying claims, as spotlighted by ProPublica. Cigna spokesperson Justine Sessions said the company’s claims-denial process “is not powered by AI.”
Indeed, companies are at pains to frame AI as a loyal servant. Optum, part of health giant UnitedHealth Group, announced Feb. 4 that it was rolling out tech-powered prior authorization, with plenty of mentions of speedier approvals.
“We’re transforming the prior authorization process to address the friction it causes,” John Kontor, a senior vice president at Optum, said in a press release.
Still, AI is a natural field to regulate, said Alex Bores, a computer scientist and New York Assembly member who played a prominent role in the state’s legislative debate over AI, a debate that culminated in a comprehensive bill governing the technology.
“So many people already find the answers that they’re getting from their insurance companies to be inscrutable,” said Bores, a Democrat who is running for Congress. “Adding in a layer that cannot by its nature explain itself doesn’t seem like it’ll be helpful there.”
At least some people in medicine — doctors, for example — are cheering legislators and regulators on. The American Medical Association “supports state regulations seeking greater accountability and transparency from commercial health insurers that use AI and machine learning tools to review prior authorization requests,” said John Whyte, the organization’s CEO.
Whyte said insurers already use AI and “doctors still face delayed patient care, opaque insurer decisions, inconsistent authorization rules, and crushing administrative work.”
Insurers Push Back
With legislation approved or pending in at least nine states, it’s unclear how much of an effect the state laws will have, said University of Minnesota law professor Daniel Schwarcz. States can’t regulate “self-insured” plans, which are used by many employers; only the federal government has that power.
But there are deeper issues, Schwarcz said: Most of the state legislation he’s seen would require a human to sign off on any decision proposed by AI but doesn’t specify what that means.
The laws don’t offer a clear framework for understanding how much review is enough, and over time humans tend to become a little lazy and simply sign off on any suggestions by a computer, he said.
Still, insurers view the spate of bills as a problem. “Broadly speaking, regulatory burden is real,” said Dan Jones, senior vice president for federal affairs at the Alliance of Community Health Plans, a trade group for some nonprofit health insurers. If insurers spend more time working through a patchwork of state and federal laws, he continued, that means “less time that can be spent and invested into what we’re intended to be doing, which is focusing on making sure that patients are getting the right access to care.”
Linda Ujifusa, a Democratic state senator in Rhode Island, said insurers came out last year against the bill she sponsored to restrict AI use in coverage denials. It passed in one chamber, though not the other.
“There’s tremendous opposition” to anything that regulates tactics such as prior authorization, she said, and “tremendous opposition” to identifying intermediaries such as private insurers or pharmacy benefit managers “as a problem.”
In a letter criticizing the bill, AHIP, an insurer trade group, advocated for “balanced policies that promote innovation while protecting patients.”
“Health plans recognize that AI has the potential to drive better health care outcomes — enhancing patient experience, closing gaps in care, accelerating innovation, and reducing administrative burden and costs to improve the focus on patient care,” Chris Bond, an AHIP spokesperson, told KFF Health News. And, he continued, they need a “consistent, national approach anchored in a comprehensive federal AI policy framework.”
Seeking Balance
In California, Newsom has signed some laws regulating AI, including one requiring health insurers to ensure their algorithms are fairly and equitably applied. But the Democratic governor has vetoed others that took a broader approach, such as a bill that would have imposed more mandates on how the technology must work and required insurers to disclose its use to regulators, clinicians, and patients upon request.
Chris Micheli, a Sacramento-based lobbyist, said the governor likely wants to ensure the state budget — consistently powered by outsize stock market gains, especially from tech companies — stays flush. That necessitates balance.
Newsom is trying to “ensure that financial spigot continues, and at the same time ensure that there are some protections for California consumers,” he said. He added that insurers believe they are already subject to a welter of regulations.
The Trump administration seems persuaded. The president’s recent executive order proposed suing, and restricting certain federal funding for, any state that enacts what it characterized as “excessive” regulation, with some exceptions, including for policies that protect children.
That order is possibly unconstitutional, said Carmel Shachar, a health policy scholar at Harvard Law School. The source of preemption authority is generally Congress, she said, and federal lawmakers twice took up, but ultimately declined to pass, a provision barring states from regulating AI.
“Based on our previous understanding of federalism and the balance of powers between Congress and the executive, a challenge here would be very likely to succeed,” Shachar said.
Some lawmakers view Trump’s order skeptically at best, noting the administration has been removing guardrails, and preventing others from erecting them, to an extreme degree.
“There isn’t really a question of, should it be federal or should it be state right now?” Bores said. “The question is, should it be state or not at all?”