It’s getting interesting: Anthropic has just launched health integrations for Claude, which let you connect data from fitness and health applications straight to its models. For this kind of thing (analyzing data from my watch, my PAP device, my overnight pulse oximeter, plus periodic and specialized tests) I use ChatGPT, but I have to paste practically all of that data into conversations myself. That’s down to EU legislators, who generally think soberly about our privacy, but whose legislative mills effectively block interesting AI features.
I know this may be too much for some people, but I also understand that plenty of language-model users use them exactly this way. Hence OpenAI has ChatGPT Health (not available in the EU, and the same goes for Claude’s offering, unfortunately): a separate space for health analysis, with Apple Health and Google Health integration.
Health integrations for Claude boil down to the ability to connect health- and activity-related apps so that the assistant has access to the numbers we typically view broken down into charts, tabs, and “closed rings.” Many people, however, don’t know what these numbers mean and can’t draw conclusions from them. That doesn’t reflect poorly on them; health apps rarely translate the data or extract the “meat” from it on their own.
Thanks to this, Claude is supposed to summarize our health history and explain the parameters “in human terms”: what they mean, what their causes may be, and how to understand them in the context of everyday life. The less medical jargon, the more sense it all makes to the user.
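To make the idea concrete, here is a minimal sketch using the Anthropic Python SDK’s standard Messages API. To be clear: this is not the new connected-apps integration itself (Anthropic hasn’t published a developer API for it); the metric names and values below are made up for illustration.

```python
# A minimal sketch, NOT the official health integration: we format some
# made-up health metrics as text and ask Claude to explain them plainly
# via the standard Anthropic Messages API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical weekly averages, e.g. exported from a watch and a pulse oximeter.
metrics = {
    "resting_heart_rate_bpm": 61,
    "avg_sleep_hours": 6.2,
    "avg_spo2_percent": 94,
    "daily_steps": 4300,
}

summary = "\n".join(f"- {name}: {value}" for name, value in metrics.items())

response = client.messages.create(
    model="claude-sonnet-4-5",  # any current Claude model ID works here
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Here are my weekly health averages:\n"
            f"{summary}\n"
            "Explain in plain language what these numbers mean, without "
            "medical jargon, and do not give medical advice."
        ),
    }],
)

print(response.content[0].text)
```

The connected-apps version presumably does the data gathering for you; the point is simply that once the numbers reach the model as context, it can explain them in plain language.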
Anthropic is also careful not to overdo it in its communication. The company emphasizes that Claude’s answers do not replace a medical consultation, and that more serious matters should be assessed by a professional. This is the typical disclaimer, but a necessary one. It has already happened that people, scared by a “consultation with Dr. GPT,” ran to their doctors and insisted on a specific medical procedure because “they might die.” As it turned out, most of them were fine and the model had simply overshot. Remember, too, that no AI will replace your doctor.
Foundation in data
If you compiled a top list of the most common health recommendations on the Internet, what would be on it? Sleep better. Get moving. Drink water. Standard. Without data, such generic advice means little, and the user is left with a checklist that applies to everyone and no one at the same time.
Connecting health data to AI shifts the focus to specific advice grounded in that data. Suddenly the assistant doesn’t rely on what you declare; it sees trends: decreased activity, poor sleep, heart-rate spikes, lack of recovery. AI is generally good at spotting trends and patterns, and that can be put to good, sensible use.
So Claude can piece together things you would normally look at separately in health apps: history, averages, changes over time. It can also help you notice relationships that would otherwise escape you.
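What does “spotting a trend” look like at its simplest? A toy sketch in plain Python, with made-up numbers: compare last week’s average resting heart rate to the week before, the kind of shift an assistant can surface while you are staring at separate tabs.

```python
# Toy trend detection over made-up daily resting heart rates: compare the
# average of the last 7 days with the average of the 7 days before it.

resting_hr = [58, 59, 57, 60, 58, 59, 58,   # two weeks ago
              63, 64, 62, 65, 66, 64, 67]   # last week: creeping upward

def weekly_shift(values: list[float]) -> float:
    """Difference between last week's average and the week before."""
    previous, latest = values[-14:-7], values[-7:]
    return sum(latest) / len(latest) - sum(previous) / len(previous)

shift = weekly_shift(resting_hr)
if shift > 3:
    print(f"Resting heart rate is up {shift:.1f} bpm week over week; worth a look.")
else:
    print(f"Change of {shift:.1f} bpm; within normal noise.")
```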
Privacy
Health data is some of the most sensitive information a person can hand to a technology company. Even if we are only talking about workout parameters, we are still talking about very sensitive things: far more can be read about a user from such data than from their general activity on the Internet.
Anthropic promises “tough rules” for protecting this information. Connecting your data requires explicit consent, and health-related conversations are not supposed to feed either model training or chat memory. That matters, because in AI assistants, memory is one of the least-trusted components. At the same time, the company did not specify whether the data is processed in the background to fight fraud, develop models, and so on. This does not necessarily mean anything bad, but the lack of specifics gives users room to develop some… anxiety.
Claude comes to the doctor, and the doctor…
At the same time, Anthropic announced something more important: Claude for Healthcare. This is a version for medical professionals, compliant with the American health data protection standard (HIPAA). The stakes are higher here, because it is about supporting clinicians, and the requirements will be much, much greater.
Claude for doctors is supposed to connect to databases, patient records, the diagnosis and procedure code sets used in the applicable standards (in the US, think ICD-10 diagnosis codes and CPT procedure codes), and so on. Instead of manually wading through documentation, regulations, classifications, and patient history, a doctor can query the system in natural language and get the answer they need.
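As a rough illustration only (Anthropic hasn’t published the interfaces of Claude for Healthcare), here is how a natural-language lookup over a tiny diagnosis-code table might be wired up with the same Messages API. The toy table and prompt are my assumptions, though the ICD-10-CM codes shown are real ones.

```python
# Hypothetical sketch: answering a natural-language question against a tiny,
# hard-coded table of ICD-10-CM codes. A real clinical system would query
# licensed code databases and patient records, not a dict.
import anthropic

# Three real ICD-10-CM codes, trimmed down to a toy lookup table.
icd10_sample = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

question = "Which code covers uncomplicated type 2 diabetes?"
table = "\n".join(f"{code}: {desc}" for code, desc in icd10_sample.items())

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=200,
    messages=[{
        "role": "user",
        "content": (
            f"Code table:\n{table}\n\nQuestion: {question}\n"
            "Answer with the code and one sentence of justification."
        ),
    }],
)
print(response.content[0].text)
```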
Claude is supposed to save time, paperwork, and money. Here the AI must be extremely humble and as reliable as possible; a single mistake in a clinical context can be a tragedy. So the company has gone all in.
Users are already testing
Initial reactions to health integrations for Claude are, interestingly, quite mixed. Some people upload years of data, build personal dashboards, look for trends, and try to “understand themselves” through statistics. A little sensible, a little neurotic; I would do it myself. Oh wait, that’s exactly what I do, just through workarounds.
The second group is skeptical, mainly over privacy. Hardly surprising, because the technology industry has a long history of selling privacy as a promise and then trampling all over that promise in high heels. Seriously. Anthropic talks about a “secure data container” inside Claude, but people have long since learned that marketing-speak is no guarantee. And they are right, undoubtedly. The technology industry is among the most corrupt, because it’s all about money. And since we are talking about absolutely enormous money, it spoils management boards and shareholders all the more.
AI wants to be everywhere
The largest players on the AI market are clearly trying to expand their influence into other areas of users’ lives. OpenAI has its own integrations of this type and is starting to show ads to users on lower-tier plans. Microsoft pushes Copilot into virtually every corner of Windows and invests heavily in data centers. Google elbows its way in everywhere, because its money for data centers and model development is practically unlimited.
Read also: Amazon invests $4 billion in Claude AI. It has a hidden agenda
So let’s not be surprised that artificial intelligence wants to “help” us with everything (“help” may be the wrong word, since helping is selfless, and the companies behind AI models ARE NOT selfless). It simply wants to manage every area of our lives. What for? Money. If a tech company tells you it is doing something for you, then… it is fooling you. You would have survived without its care: maybe a little less comfortably, but you would have.
