October 2, 2025

AI as your new medical assistant: who pays the bill if something goes wrong?

Artificial intelligence (AI) is no longer a pipe dream: it’s a reality, and it’s already transforming healthcare in your hospital. This smart technology helps with diagnoses and predicts the course that an illness will take. This offers tremendous opportunities, but also raises a crucial question: what if the algorithm makes a mistake? Who is liable?


AI’s added value in practice

The impact of AI in medicine is huge. As well as improving diagnoses, the technology also optimises care processes and treatments. Here are a few examples of what it offers:

  • Faster, more accurate diagnoses: Algorithms analyse medical images such as CT scans or mammograms with impressive precision. They spot abnormalities that are difficult for the human eye to discern, leading to faster and better diagnoses.
  • Predictive and personalised care: By analysing large volumes of patient data, the software learns to spot patterns that indicate an increased risk of diseases such as sepsis. This makes preventive interventions and personalised treatment plans possible, for example by predicting how patients will respond to chemotherapy.
  • More efficient hospital management: The technology automates repetitive tasks such as planning and record management. It also helps predict bed and staff demand, allowing for more efficient use of hospital resources.
  • Better patient support: Smart chatbots keep patients informed and enable remote monitoring – particularly useful for patients with chronic illnesses.

Who is liable if something goes wrong?

This technological advance creates a complex legal problem. Doctors used to make decisions as the sole expert; now a highly influential digital assistant has joined them. This raises several important questions:

  • Who bears ultimate responsibility? Do doctors always remain ultimately responsible, even if they rely on the output of a certified system?
  • When is the product defective? Can the hospital hold the software developer liable if the algorithm doesn’t deliver the promised reliability?
  • What is expected of the doctor? Should doctors be prepared to ignore an algorithm’s recommendation? And what if this leads to harm?
  • What’s the role of the hospital management? The management is responsible for purchasing secure technology. Directors’ and officers’ liability may be triggered by a bad choice.

The impact on your insurance policies

These new risks have a direct impact on your hospital’s and doctors’ insurance.

  1. Professional and commercial civil liability: Under Belgian law, AI can be considered ‘auxiliary equipment’. This means that the healthcare provider is in principle liable for its failure, just as with any other medical instrument. The ultimate responsibility therefore often remains with the doctor, who must interpret and validate the output.
  2. Directors’ and officers’ liability: Directors are responsible for strategic investments. When purchasing AI systems, they must check their reliability and certification thoroughly. If they act negligently, they may be held personally liable for a management error.
  3. Cyber insurance: AI systems are vulnerable to cyber attacks. A cyber policy is essential for covering the direct damage arising from an incident, but it usually doesn’t cover the consequential damage of a medical error caused by a hacked system.

How should you prepare for this now?

Don’t wait until the European AI Act’s obligations fully apply. Take proactive steps now to manage the risks:

  • Thorough screening before purchase: Make sure suppliers and their products are rigorously vetted.
  • Unambiguous contracts: Determine liability contractually with the software vendor. Who is responsible for updates and bug fixes?
  • Clear protocols and training: Develop internal protocols for using the software and make sure your employees are given thorough training that covers both its possibilities and its limitations.
  • Human oversight remains crucial: Emphasise that AI is a tool, not a replacement for clinical judgement. The doctor must always make the final decision.

The rise of AI in healthcare is a positive development. However, a proper understanding of the risks and a proactive approach are crucial to making the most of the opportunities while simultaneously ensuring patient safety and protecting your institution. As your insurance broker, we keep a close eye on these developments and proactively advise you on the best way to adapt your policies to the risks of tomorrow.

Related posts


New Motor Vehicle Liability Law (WAM): mandatory insurance for vehicles in care settings.

Healthcare
27.09.2025

In your care institution, mobility scooters, electric wheelchairs and special bicycles such as electric rickshaws are indispensable. They increase your residents’ and patients’ independence and enable them to enjoy outings. However, the Motor Vehicle Liability Law (WAM) has recently been amended, with significant consequences for the requirement for insurance for these vehicles, even if they are only used on your own premises. In this article, we set out clearly what these changes mean for your care institution and how you can make sure you are properly insured.

Read more

New 2024 Indicative Table: compensation significantly increased

People
26.09.2025

Are you the victim of an accident resulting in physical injury for which someone else is liable? If so, you are entitled to compensation. Thanks to the new 2024 Indicative Table, compensation for such injuries is now considerably higher. What’s changed and what does it mean? Find out below.

Read more

Being open after an incident: how frank disclosure strengthens trust in healthcare

Healthcare
23.06.2025

When something goes wrong in healthcare, there is a lot at stake besides the patient’s health: the patient’s trust, the healthcare institution’s reputation and the care provider’s well-being. In such situations, open and honest communication is crucial. Open disclosure is a structured approach to holding difficult conversations, with empathy and transparency as core values. During a recent symposium at one of our hospital clients, this subject was discussed from the legal and insurance viewpoints. We were there and would like to share the insights.

Read more

How your flexible mobility choices affect your work accident insurance statistics

Public sector and government
20.06.2025

An employee falling off a lease bike on a wet cycle path. An e-scooter being hit by a car during the morning rush hour. Or a seemingly harmless fall that leads to months off work. These are scenarios that you’d rather avoid as an employer, but they are becoming increasingly common. So what’s the cause? The rise of alternative mobility, often encouraged through the ‘cafeteria’ (flexible benefits) plan. The well-intentioned promotion of sustainable and healthy choices can unintentionally lead to an increase in work accidents. And that affects both your insurance and your costs.

Read more