UPDATED 20:45 EST / JANUARY 11 2026


Anthropic pushes into healthcare to help patients understand their medical records

Artificial intelligence developer Anthropic PBC debuted new healthcare and life sciences capabilities in its flagship chatbot Claude today, saying users can now share their medical records with the service to better understand their health.

Claude now lets users share information from their official medical records and fitness apps such as Apple Inc.'s Health app on iOS, so it can hold more personalized conversations about their health. The new features are available now for Claude Pro and Max plan subscribers in the U.S.

The launch comes just days after Anthropic’s main rival OpenAI Group PBC debuted ChatGPT Health, and underscores how AI companies view healthcare as a major opportunity for the technology.

Anthropic Head of Life Sciences Eric Kauderer-Abrams told NBC News in an interview that the new features build on last October’s launch of Claude for Life Sciences, which transformed the chatbot into a proactive research partner for clinicians and scientists that can aid in tasks such as drug discovery. With the new capabilities, Anthropic is targeting patients directly, aiming to help them better understand their own health.

“When connected, Claude can summarize users’ medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments,” the company wrote in a blog post. “The aim is to make patients’ conversations with doctors more productive, and to help users stay well-informed about their health.”

When it launched ChatGPT Health last week, OpenAI said hundreds of millions of users were already asking the standard version of its chatbot health- and wellness-related questions every week, which is why it sees such enormous potential in making a more concerted effort to tackle medical issues.

However, the company was keen to stress that the app is not intended to be used for diagnosis or to recommend any particular treatment. Rather, it’s simply there to help users “navigate everyday questions and understand patterns over time.”

Kauderer-Abrams said Claude for Healthcare can help users understand complex medical reports more easily, double-check doctors’ decisions, and summarize and synthesize medical information for the billions of people around the world who lack access to it.

As with OpenAI, Anthropic was eager to stress the privacy protections it has built into Claude for Healthcare. It explained that healthcare data shared with the chatbot will not be stored in its memory feature and will never be used to train future versions of the model. Users also have the option to disconnect their medical records or edit the chatbot’s permissions at any time, the company said.

Besides patients, Anthropic is also targeting healthcare providers, expanding the Claude for Life Sciences offering that’s primarily focused on research. That offering now boasts a “HIPAA-ready infrastructure,” the company said, referring to the U.S. federal law that governs medical privacy.

That means it can connect to federal healthcare coverage databases, the federal registry of medical providers and other services to help make the lives of physicians easier. For instance, the chatbot can help with time-consuming tasks such as preparing prior authorization requests for specialist care, or prepare the ground for insurance appeals by matching patient records with clinical guidelines.

Dhruv Parthasarathy, chief technology officer of Commure Inc., which sells AI tools that aid in the creation of medical documentation, said Claude will help his company to save clinicians “millions of hours annually” and return their focus to patient care.

Though Anthropic and OpenAI clearly see healthcare as a major opportunity, the launch will likely intensify scrutiny over the suitability of these kinds of tools for dispensing medical advice. To date, the industry’s track record has been questionable: Google LLC and Character Technologies Inc. last week agreed to settle out of court a lawsuit alleging that their AI chatbots had influenced the mental health of a teenager who later died by suicide.

Anthropic does put out a disclaimer, warning that Claude can make mistakes and should not be used as a substitute for qualified medical advice. “For critical use cases where every detail matters, you should absolutely still check the information,” said Kauderer-Abrams. “We’re not claiming that you can completely remove the human from the loop. We see it as a tool to amplify what the human experts can do.”

Image: Anthropic
