Healthcare marketing experts say that in the short term, this means health brands should prioritize AI optimization to increase the chances of their products being recommended by chatbots. “Some of the most impactful work here is less visible, such as metadata, content consistency and structured libraries,” said Julie O’Donnell, global head of digital at healthcare consulting firm Inizio Evoke. “Let’s be honest, these are areas where many health and wellness brands have historically been weak. In an AI-driven environment, that has to change. You don’t need to be the loudest or biggest consumer brand; brands that operate smarter and systematically build credibility can reap meaningful benefits.”
However, experts are divided over what this shift toward AI-driven health will mean for traditional trackers. Some warn that while first-generation devices like Whoop bands and Oura rings brought health tracking to consumers, wearables and their apps could be reduced to mere suppliers of sensor data if ChatGPT Health becomes the go-to platform for interpreting health data and delivering personalized recommendations.
But Oura, the U.S.-based maker of the health-tracking ring, already offers wearers an AI-powered “Oura Advisor” in its app. The company sees the new features as validation of user demand and as a tool to “complement” its own advisor.
“Oura Advisor is unique in that it is built on continuous biometric data and long-term baselines for each member within the Oura app, so it can translate real patterns and behaviors into contextual, actionable recommendations and provide clear guardrails around clinical adjacency,” a company spokesperson said. “In this sense, the general AI tool and Oura Advisor are complementary—one provides broad information, while the other translates personal data measured directly from wearable devices into tailored guidance.”
If consumers increasingly feed their wearable data into general artificial intelligence models, then wearable device brands have a greater responsibility to improve the quality and provenance of the data they collect, experts say.
“Many wearable metrics are proxies or estimates of underlying physiological processes, and as conversational interfaces take center stage, the accuracy and stability of these inputs becomes critical,” said Billie Whitehouse, CEO of Wearable X.
Accuracy issues
OpenAI said it has worked with more than 260 clinicians over the past two years to help develop ChatGPT Health’s responses. While AI software purpose-built for clinical settings is regulated and tested for use in healthcare, general-purpose chatbots are not held to the same standards. Large language models (LLMs) such as ChatGPT draw from thousands of internet sources, some of which are more reliable than others.
Physicians, on the other hand, consult the latest peer-reviewed empirical studies, which often reside behind the paywalls of medical journals. They are also trained to be empathetic—something AI chatbots have historically struggled with and a key factor in managing the mental health risks associated with obsessively tracking our health.
Like experts in other areas of AI development, healthcare experts say it is critical that humans stay in the loop. While ChatGPT Health can help save doctors time, they say, its output should still be signed off by a trained, licensed physician.
“Misinformation about medicine has been a concern for some time. On the one hand, ChatGPT may be more accurate than social media. On the other hand, patients should definitely not self-diagnose, as doctors take into account a large number of nuances to calibrate diagnoses,” said Dr. Charlie Cox, consultant at Reborne Longevity. “It’s important that there are very clear safeguards – particularly for more vulnerable individuals – to reduce the risk of misdiagnosis, but overall the net benefit should be positive.”