AI technology is turning doctors into unwilling participants in deepfake videos that peddle dubious products or spread false medical claims. This trend has alarmed clinicians, who are now demanding stricter privacy and transparency laws to curb the misuse of their identities.

Why this matters: The surge of AI-generated content on social media platforms threatens to further erode public trust in the medical profession. Beyond spreading misinformation, deepfakes could facilitate insurance fraud and data theft, and even endanger patient safety.

Key Developments

American Medical Association’s Call to Action

The American Medical Association (AMA) recently urged federal and state lawmakers to address what its CEO, Dr. John Whyte, described as a public health and safety crisis. The AMA is pushing for:

  • Legislative action to close legal loopholes and modernize identity protections.
  • Stricter penalties for deepfake creators.
  • Mandates for tech platforms to remove impersonations more swiftly.

In California, lawmakers have already introduced measures requiring disclosures on AI-generated ads and are considering a bill to explicitly ban doctor deepfakes.

Pennsylvania Takes a Stand Against AI Impersonation

On Tuesday, Pennsylvania’s medical board ordered a tech company to cease and desist after one of its chatbots falsely claimed to be a licensed doctor in the state.

Doctors Speak Out on the Rising Threat

Physicians report a growing number of cases in which their identities are used to promote unapproved medical devices and wellness supplements. Dr. John Whyte emphasized the severity of the issue:

"It's becoming more mainstream. Everyone knows someone who this has impacted. It's probably occurring more than we hear because people are embarrassed by it."

Among the high-profile victims is CNN’s Dr. Sanjay Gupta, whose likeness has appeared in convincing deepfakes promoting products such as a supposed breakthrough cure for Alzheimer’s disease. Gupta shared his concerns with CNN’s Terms of Service:

"What was different this time around was just the quality of these ads. This was really quite stunning."

Escalating Risks and Consequences

Legal and Financial Liabilities

Dr. Whyte warned that doctors could face lawsuits if patients are harmed by counterfeit products or misleading advice falsely attributed to them. The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance might provide protection.

Deepfakes Extend Beyond People

Healthcare systems are also grappling with deepfaked diagnostic images and clinical data, which can disrupt operations and patient care. A recent study published in Radiology found that most clinicians failed to detect deepfake X-rays, with one-quarter missing the fakes even after being alerted to look for unnatural textures or overly smooth bone surfaces.

Dr. Mickael Tordjman, lead author of the study and a researcher at the Icahn School of Medicine at Mount Sinai, highlighted the broader dangers:

"There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos."

The Bottom Line

AI deepfakes are undermining the credibility of healthcare professionals at a time when trust is critical to patient outcomes. Dr. Whyte stressed the urgency of the situation:

"We shouldn't have to make the public detectives to determine whether something's not a deep fake."

Source: Axios