
Risks of remote patient monitoring | Partner voice | MAHA’s AI stumble, OpenAI’s healthcare offensive, Mayo’s AI startups

Buzzworthy developments of the past few days. 

  • The White House’s MAHA report went up, got slammed and then went up again—with fixes. The odd episode seems to have begun when a little-known but intrepid news outlet, Notus, reported in the early morning hours of May 29 that the Trump administration’s first Make America Healthy Again report, Make Our Children Healthy Again, “misinterprets some studies and cites others that don’t exist.” It didn’t take a digital detective to suspect the authors of using AI to do some of their work. 
     
    • Later that afternoon, online-only Notus informed readers that a cleaned-up version of the report, the pride and joy of HHS Secretary Robert F. Kennedy Jr., had been posted “after the White House blamed errors on ‘formatting.’” The day hadn’t waned much before more widely followed news operations picked up the hot story—and understandably piled on. 
       
    • Thursday evening, for example, the Washington Post quoted a distinguished scholar who said he was shocked by the evident carelessness of the MAHA report authors. “Frankly, that’s shoddy work,” said the source, Oren Etzioni, PhD, a professor emeritus of computer science and AI expert at the University of Washington in Seattle. “We deserve better.” 
       
    • As for the feisty news breaker: Give credit where credit is due. Notus, which is an acronym for News of the United States, has only been around since 2023. It’s owned and run by the nonprofit Allbritton Journalism Institute based in Washington, D.C. 
       
    • Just about every major news outlet is on the story now.
       
  • California isn’t the only state banning the use of AI for auto-denying healthcare claims. Arizona just became the second. Drafted by a Republican representative and signed into law by Democratic Gov. Katie Hobbs, the new Grand Canyon State law mandates review of any pending AI denial by an individual physician applying “independent medical judgment.” It’ll go live July 1, 2026, giving payers time to prepare for compliance. The law “ensures that a doctor, not a computer, is making medical decisions,” House Majority Whip Julie Willoughby, the bill’s sponsor, says in a celebratory statement. “If care is denied, it should be by someone with the training and ethical duty to put patients first.” 
     
  • Going forward, Big Medtech is going to let AI meaningfully contribute to product development. Mentioning Medtronic and Siemens Healthineers as examples, the industry publication MD+DI passes along the prediction as made by Omar Khateeb, an entrepreneur and the host of a podcast called “State of Medtech.” Khateeb gave the keynote address at a regional conference in New York City last week. In coverage of the event, MD+DI editor-in-chief Omar Ford quotes Khateeb as saying medical OEMs will increasingly use AI to “co-collaborate, getting involved with regulatory from a very early stage. More importantly, all these kinds of things are going to change how we design things.”
     
  • Meanwhile Mayo Clinic is looking to give promising medtech startups a seriously running start. Toward that end, the august institution has launched the new Mayo Venture Partner program on the strength of guidance from three industry veterans. These “MVPs” will identify high-potential opportunities across Mayo Clinic’s research and clinical practices. Then they’ll lend support and know-how to especially innovative new companies, focusing on those using emerging technologies to advance the state of patient care. Mayo says the program offers a chance for investors, CEOs and innovators to collaborate with Mayo Clinic and “be part of a future that prioritizes patient-centric, transformative healthcare solutions.” The three inaugural experts are Amy DuRoss, Audrey Greenberg and Brian Poger. Learn more about them and the program itself here.
     
  • This month OpenAI made two big moves to supersize its presence in healthcare. First it released HealthBench, a carefully conceived benchmarking toolkit for assessing the capabilities of healthcare-specific AI systems. Then it plunked down $6.5B for the AI hardware startup called “io,” which was birthed in 2024 by former Apple designer Jony Ive (of original iPhone fame). Analyzing the confluence of the dual stratagems, Forbes commentator Sai Balasubramanian, MD, JD, notes that healthcare AI is coming into its own. This would be hard to miss, he suggests, as tech companies and large hyper-scalers invest billions of dollars to perfect models solely aimed at healthcare use cases. Additionally, he remarks, new hardware and devices “add an entirely new layer to this phenomenon, as users will be able to better use these devices to interact with their surroundings, track their day-to-day health metrics further and have a true ‘intelligent companion.’” For healthcare consumers, he adds, the resulting scenario will be “almost akin to having a live concierge clinician with them at all times.”  
     
  • OpenAI is also living rent-free in Google’s head. So are other companies using AI to change the face of web search. Evidence exhibit A: Last week Google announced 100 new things at its I/O developer conference—and most of them seemed aimed at keeping pace, or catching up, with OpenAI’s ChatGPT Search and/or Perplexity AI Search. So observes Vox senior tech correspondent Adam Clark Estes in a piece posted May 29. “Suddenly, there’s a new narrative,” he suggests. “The search giant is for sure having a midlife crisis.” Worse yet, the angst comes as Google paces the proverbial floor, hoping against hope it doesn’t get broken up. Its odds of avoiding such a fate don’t look good after two federal judges ruled Google’s search operation is an illegal monopoly. “As AI encroaches on every corner of our digital experience, it’s not clear which company will dominate the next era or how we’ll interact with it,” Estes writes. “It almost certainly won’t be by typing keywords into a search engine.”
     
  • Just about everyone can use an occasional refresher on ‘common’ AI terms. After all, things are changing fast. One person’s familiar tech jargon is always another’s cryptic geek speak. For the sometimes-strugglers, present company included, TechCrunch is out with an updated glossary. Check it out. And maybe bookmark the page until it too becomes outdated. 
     
  • From AIin.Healthcare’s news partners: 
     

What keeps clinicians practicing longer?
At McFarland Clinic, it’s the impact of using Nabla’s Ambient AI Assistant.

From reducing time spent charting to feeling more present with patients, clinicians across 12 specialties are seeing real benefits—with some even saying it’s extended their careers by years.

Hear directly from McFarland providers on how Nabla fits into their Epic workflow and supports the joy of practicing medicine.

📽️ Watch the testimonial and read the full case study

When augmented by AI, remote patient monitoring can help clinicians make care decisions and build treatment plans based on deep insights into rich data. And that’s regardless of care setting—hospital, home, long-term care facility, you name it.

But capitalizing on the upsides means attending to numerous risks and challenges. Primary among these are privacy and cybersecurity concerns. Researchers in Europe consider 10 formidable digital hazards in a literature review published May 25 in IEEE Access, a journal of the U.S.-based Institute of Electrical and Electronics Engineers. 

Jolly Trivedi and colleagues at the University of Turku in Finland hope the paper will “offer insights for developing resilient healthcare infrastructures” while “lay[ing] out a roadmap for future research into AI-driven threat intelligence security for remote patient monitoring (RPM) systems.”

Here are segments from three of their 10 key cybersecurity challenges in remote patient monitoring.  

1. Data availability and quality. 

Due to privacy and competitive concerns, threat intelligence information is not always shared among healthcare organizations, Trivedi and co-authors point out. In order to accurately identify new risks and effectively generalize threat countermeasures, they explain, “large and diversified datasets are necessary for AI algorithms” to work with.  

‘Moreover, biased AI outputs could produce false positives or negatives in threat detection.’

2. High computational and resource demands.

It takes a lot of computing power to train AI models for cybersecurity applications, particularly for real-time threat intelligence, the authors note. “A significant amount of processing power, memory and storage is required to analyze massive amounts of data, spot abnormalities in real time and find patterns by AI systems such as deep learning neural networks or sophisticated ML algorithms.” More:  

‘Due to the possibility of resource constraints, healthcare organizations—especially those with smaller staff sizes or less sophisticated IT systems—may find it difficult to deploy and manage AI-based threat intelligence systems.’

3. AI interpretability and explainability. 

AI algorithms, especially deep learning models, are often known as “black boxes” because of the difficulty in understanding how they arrive at specific decisions, Trivedi et al. observe. “In the case of AI-based threat intelligence in RPM systems, this lack of transparency can pose serious concerns,” they add. “Healthcare administrators and cybersecurity professionals need to trust the AI system’s decisions, especially in critical scenarios where data breaches or unauthorized access to patient data are detected.”

‘The inability to explain how an AI model identifies threats or prioritizes security risks can lead to distrust in the [RPM] system [itself].’ 

The paper is posted in full for free. (Click PDF link.) 

  • Other research in the news:
     
  • Funding rounds and IPOs: 
     
