Paul Boyer, a psychotherapist at Kaiser Permanente in Oakland, California, has firsthand experience with the AI revolution in healthcare—and he’s not impressed. The health system recently deployed a new suite of AI-powered note-taking software developed by Abridge, a leader in healthcare AI, designed to summarize patient visits at unprecedented speeds. While the technology aims to ease the administrative burden on clinicians, Boyer and his colleagues find it far from perfect.

“It is not super useful,” Boyer said. “Abridge is not good at picking up on clinical nuance or emotional tone—critical elements in mental health care.” For example, he explained, the software struggles to interpret the way something is said, which can be far more important than the words themselves when assessing a manic patient. As a result, clinicians often spend additional time correcting the AI-generated notes.

AI note-taking software is no longer a futuristic concept—it’s already here. Hospitals across the U.S. are rapidly adopting these tools, and research suggests they offer tangible benefits. A study published in April 2024 in the Journal of the American Medical Association found that, across five hospitals, the doctors who used these products most frequently saved more than 30 minutes of work per day. Interview-based studies have also reported generally positive reactions from clinicians where the software is deployed.

Yet, as Boyer’s experience illustrates, concerns about the quality and reliability of AI scribes persist. Clinicians can catch and correct mistakes, but safety researchers warn that over-reliance on these systems could allow critical patient details to be missed or obscured, potentially harming care. Abridge says it evaluates its software rigorously. “Following deployment of a model, we monitor clinician edits, star ratings, and free-text feedback from clinician users about note quality,” said Davis Liang, the company’s director of applied science, in a statement to KFF Health News.

AI-powered scribe software is just one of many AI tools entering healthcare, and clinicians and patient-safety advocates argue that current regulations are ill-equipped to address the risks these technologies pose. “There is currently no safeguard in place” to vet scribe software at the federal level, warned Raj Ratwani, a human factors researcher at MedStar Health in Columbia, Maryland. Ratwani’s concerns extend further: proposed rules from the Office of the National Coordinator for Health IT (ONC), the federal body that regulates electronic health records, could weaken standards for clarity, usability, and transparency regarding AI use in medical records. Such changes, he cautioned, could lead to incomprehensible records, confusion among clinicians, and ultimately, medical errors.

The ONC has historically emphasized “user-centered design” in electronic health records, a push that began during the Obama administration. However, advocates like Ratwani fear that the proposed regulatory changes could undermine these efforts, leaving patient safety at risk as AI tools proliferate in healthcare.