As healthcare marketers adopt AI tools in nearly every workflow, the future of the technology may hinge on the regulations that eventually govern it.

Nearly 80% of Americans believe the federal government should implement stricter AI regulations, according to a recent report from Authority Hacker. But experts believe the U.S. is far from implementing policy that’s as specific and nuanced as needed.

“The current standards are minimal,” said Authority Hacker co-founder Mark Webster. “It’s a real problem if you’re trying to develop a tool or work with this technology, because it’s kind of the Wild West at the moment. You don’t know what you can and can’t do.”

The survey, which polled 2,000 people, sought to pinpoint how they’re feeling about the changing AI landscape across a variety of sectors, including healthcare.

Eighty-two percent of respondents said privacy was a “major concern” and that they were worried about the use of personal data in training AI systems. The survey dubbed this feeling “AI anxiety.”

Last year, President Joe Biden released an executive order calling for the development of initial guidelines around the “Wild West” of AI. It included specific requests for guardrails around the use of AI in healthcare, among them a mandate for the Department of Health and Human Services to develop responsible AI standards. The order also requires companies to notify the federal government if they create an AI model that could pose a national security or public health risk.

“It’s making it clear at a very basic level what people are not allowed to do with the technology — you can’t invent a killer robot, or these extreme examples,” Webster explained, before adding, “There are a lot of day-to-day mundane cases where it’s just not clear.” He noted that granular implementation of Biden’s order is unlikely to occur anytime soon.

“We’re definitely fighting an uphill battle here,” Webster continued. “The pace of development with AI is such that it’s hard to keep up with the technology, let alone legislate it. The speed with which the majority of governments around the world move makes it a real challenge to keep up.”

Beyond Biden’s executive order, lawmakers have sought to introduce legislation in Congress regulating AI. This includes the Health Technology Act of 2023, which would allow certain AI and machine learning tools to prescribe drugs to patients if authorized by state law and approved by the FDA.

Similarly, an emerging patchwork of state laws is designed to regulate AI. Massachusetts’ H1974 bill, for instance, would require mental health professionals who use AI in their practices to secure approval from a licensing board. The bill would also require these providers to disclose their use of AI to their patients and seek informed consent.

But the majority of state and federal laws targeting AI in healthcare are “still in the proposal stage,” according to Holistic AI.

Then there’s the Food and Drug Administration, which is already tasked with monitoring AI in medical devices. The agency has noted, however, that its traditional device regulation wasn’t designed for ever-changing AI models.

In a fireside chat earlier this year, FDA Commissioner Robert Califf said the agency has thus far managed to create a “good scheme” to regulate AI algorithms in pacemakers, defibrillators and other devices.

But the FDA’s power to regulate AI in clinical practice and elsewhere is limited: Congress would need to enact statutory changes to give the agency that power. Meanwhile, the FDA would have to triple or quadruple the size of its staff in order to “take on” the expected burden of AI regulation beyond medical devices, Califf said.

In the absence of clearer federal laws, Califf has hinted at the FDA working with external and industry partners to sort out some of these regulatory questions. “We’ve got to have a community of entities that do the assessments in a way that gives the certification of the algorithms actually doing good and not harm,” he noted.

Webster hopes to see specific regulations around data privacy, bias and AI accountability in healthcare.

During a recent press call, Vice President Kamala Harris highlighted concerns about AI bias. She pointed to the Department of Veterans Affairs — which is already testing AI in its medical centers — and noted that if the agency wants to use AI in its hospitals to diagnose patients, it should “first have to demonstrate that AI does not produce racially biased diagnoses.”

In some of Authority Hacker’s previous surveys, data privacy has been highlighted as a top concern when it comes to AI. In healthcare, there needs to be more clarity around the use of patient data, such as mental health providers using patient data from psychological exams to train AI models.

“I’m not sure that everybody who ticks the ‘yes, I agree to the terms of service’ box when they’re submitting their data is fully aware of how their information could be used in some of these large language models,” Webster explained. “That’s something I think is worrying for a lot of people.”

“The more mundane day-to-day issues like data privacy and bias are actually affecting us now,” he continued. “They might be easier to begin with. Governments should maybe prioritize those smaller issues before tackling the big global ones.”