AI chatbots have become a first access point for many users seeking health and symptom information. A recent New York Times article ("They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.") once again illustrates both this trend and its risks. At the same time, pressing questions arise at the very core of the EU regulatory framework:
👉 When is a health‑related AI chatbot legally considered a medical device under the EU Medical Device Regulation (MDR)?
And: What additional requirements does the EU AI Act impose – particularly for adaptive systems?
When a Chatbot Crosses the Regulatory Threshold into Being a Medical Device
Under the MDR, software qualifies as a medical device if it serves a medical purpose, such as diagnosis, prediction, monitoring, or supporting therapeutic decisions.
Recent regulatory developments confirm this:
- In 2026, the European Commission proposed targeted amendments to the MDR/IVDR to clarify software classification and align functionality with clinical risk.
- In 2025, the Medical Device Coordination Group (MDCG) and the European Artificial Intelligence Board (AIB) jointly issued the first guidance (MDCG 2025‑6 / AIB 2025‑1) addressing the interface between MDR/IVDR and the AI Act.
The guidance states clearly: If an AI system fulfils a medical purpose, both MDR/IVDR and the AI Act apply simultaneously.
Thus, the legal situation is clear:
A chatbot that analyses symptoms, suggests diagnoses, performs triage, or influences therapeutic decision‑making is treated as a medical device – regardless of how it is marketed.
Why a Disclaimer (“No Medical Advice”) Does Not Offer Protection
Many developers attempt to shield themselves with statements such as "This chatbot does not provide medical advice."
However, regulation does not follow disclaimers. It follows actual functionality.
EU guidance emphasises:
- The intended purpose is derived from actual behaviour and the nature of the functions provided — not from marketing language.
- If the system in practice influences diagnostic or therapeutic decisions, it is MDR‑relevant, even if the provider denies such purpose.
In short: If a chatbot behaves like a medical device, it will be regulated as one – disclaimer or not.
How the EU AI Act Increases Regulatory Pressure
Even if a chatbot is not a medical device, it can fall under the EU AI Act.
AI systems in healthcare are typically classified as "high‑risk AI" when they:
- influence diagnostic or therapeutic decisions,
- affect patient safety, or
- are integrated into a medical device.
The AI Act requires, among other obligations:
- Bias control and robust data governance – evidence of representative and non‑biased training data.
- Transparency, traceability, and human oversight – users must understand how the system forms recommendations.
- Lifecycle monitoring, especially for updates and learning behaviour.
Most high‑risk AI obligations apply from 2 August 2026; for high‑risk AI embedded in products already subject to EU sectoral legislation – such as medical devices – they apply from 2 August 2027.
The result is a dual regulatory regime: MDR/IVDR + AI Act for every AI‑supported system with medical functionality.
Adaptive and Generative Models: The Greatest Regulatory Stress Point
Modern LLM‑based chatbots are:
- non‑deterministic,
- continuously learning,
- context‑sensitive, and
- often not fully explainable.
This conflicts with the MDR’s emphasis on:
- validation,
- stability,
- and predictable performance.
Regulators have expressed this challenge repeatedly:
- The Commission’s MDR/IVDR reform proposals strengthen lifecycle management, post‑market surveillance, and software risk controls for AI‑based systems.
- The WHO warns of a "regulatory vacuum" as healthcare adopts AI faster than legal safeguards evolve.
As a result: Innovative, adaptive chatbots will typically fall into the more stringent regulatory tier.
Should Health Chatbots Be Regulated? – Three Positions
- Yes, if they influence clinical decisions – this reflects the logic of the MDR and the AI Act.
- Maybe, if they appear medically authoritative – "informational only" can still be relevant if content appears personalised.
- No, if they address only general well‑being – but this is only feasible if the system is technically prevented from making medical statements, which is difficult with LLMs.
Where the Regulatory Line Should Be Drawn
A risk‑based threshold is emerging. Relevant questions include:
- How specific is the recommendation?
- How personalised is it?
- How likely is it that users will act on it?
- Can an error cause real harm?
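The four questions above can be read as an informal scoring checklist. The sketch below is purely illustrative and not part of any regulation or guidance: the class, the 0–3 scales, and the threshold value are our own assumptions, chosen only to show how such a risk‑based assessment might be structured internally.

```python
from dataclasses import dataclass


@dataclass
class ChatbotRiskProfile:
    """Hypothetical profile mirroring the four threshold questions.

    Each factor is scored 0 (low) to 3 (high); the scale is illustrative.
    """
    recommendation_specificity: int  # how specific is the recommendation?
    personalisation: int             # how personalised is it?
    likelihood_user_acts: int        # how likely will users act on it?
    potential_harm: int              # can an error cause real harm?


def likely_needs_regulatory_assessment(profile: ChatbotRiskProfile,
                                       threshold: int = 6) -> bool:
    """Illustrative only: a high combined score suggests the chatbot may
    cross the risk-based threshold and warrant MDR / AI Act analysis."""
    score = (profile.recommendation_specificity
             + profile.personalisation
             + profile.likelihood_user_acts
             + profile.potential_harm)
    return score >= threshold


# Example: a symptom checker giving personalised triage suggestions
triage_bot = ChatbotRiskProfile(recommendation_specificity=3,
                                personalisation=3,
                                likelihood_user_acts=2,
                                potential_harm=3)
print(likely_needs_regulatory_assessment(triage_bot))  # True

# Example: a generic wellness FAQ bot
wellness_bot = ChatbotRiskProfile(0, 0, 1, 0)
print(likely_needs_regulatory_assessment(wellness_bot))  # False
```

Such a checklist cannot replace legal classification, but it can help teams document why a given system was (or was not) escalated for a full MDR/AI Act assessment.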
The MDCG/AIB guidance indicates that Europe is moving toward a dual, risk‑oriented model under MDR + AI Act.
In practice:
If a chatbot functions even in part as a medical device, it should be regulated as one.
How DORDA Supports Companies
The DORDA Health & Life Science Team supports companies across the entire regulatory spectrum:
- Classification under MDR/IVDR, including Rule 11 analysis
- Assessment of AI Act applicability
- Drafting the intended purpose statement
- Preparing risk‑based technical documentation
- Designing lifecycle, PMS, and monitoring frameworks for adaptive AI
- Regulatory guidance on generative AI in healthcare
We help startups, tech companies, and established manufacturers bring innovative digital health solutions to the market — compliant, safe, and future‑proof.