Much of the excitement to date surrounding artificial intelligence (AI) has been about its technical capabilities rather than the associated moral challenges. But AI will have a drastic impact on ethics, not just clinical practice, and ethical codes for AI should start being established right now, according to Dr. Adrian Brady, consultant radiologist at Mercy University Hospital in Cork, Ireland.

Dr. Adrian Brady, consultant radiologist at Mercy University Hospital in Cork, Ireland, will speak about the ethical use of AI in radiology.

The use and manipulation of sensitive healthcare information are open to ethical blunders and dangers if not thought through carefully, he told ECR Today. To this end, many professional bodies have published guidelines on how to apply ethics to AI development and use. The ESR collaborated with the American College of Radiology (ACR) and other European and North American societies on a multi-society statement published in October 2019.

Brady said that general radiologists must understand the dangers inherent in AI and formulate strategies for avoiding those hazards. In his talk, he plans to address these key points:

  • Ethical use of AI in radiology should promote well-being, minimise harm, and ensure that the benefits and harms are distributed among the possible stakeholders in a fair manner.
  • AI in radiology should be appropriately transparent and highly dependable, curtail bias in decision-making, and ensure that responsibility and accountability remain with the human designers or operators.
  • The radiology community should start now to develop codes of ethics and practice for AI.
  • Radiologists will remain ultimately responsible for patient care and will need to acquire new skills to do their best for patients in the new AI ecosystem.

The multi-society paper deals with ethical questions of AI within radiology and describes unique moral issues that need to be considered, while steering the community in the direction of future regulation and standards of ethical radiology AI.

“AI in radiology is developing rapidly, and is exciting, but it must be implemented and used with care, ensuring ethical issues are central,” he noted.

Above all, patients expect to be kept safe, and to be confident that any AI involved in their care is free from bias and error. Bias can occur with any training dataset and AI model, according to Brady, and therefore the radiology community needs both research and awareness to recognise bias and to minimise any deleterious effects it may cause.

If AI is to undertake some tasks currently done by humans, it should be better and faster at those tasks than humans, he continued. Additionally, patients want their healthcare data to be secure, used confidentially for their benefit, and not used in any way that exceeds their consent.

Data safety and privacy in training datasets used to develop AI radiology algorithms are already becoming an issue, and patients need to understand how their imaging data may be used if they consent to making the data available to software developers.

“I want to help the public understand that AI in healthcare is a very complex issue, with many positive and potentially negative consequences. I don’t have all the ethical answers, but I will be addressing many of the questions which will require robust answers before AI can achieve mature and safe use in radiology,” he noted.

What do patients want?

For Erik Briers, PhD, from Hasselt, Belgium, who is a full-time cancer patient advocate, writer, and member of the ESR Patient Advisory Group (ESR-PAG), it is a simple enough equation: if AI speeds up or improves diagnosis and treatment, then patients will support it.

Erik Briers, PhD, from Hasselt, Belgium, a full-time cancer patient advocate, writer, and member of the ESR Patient Advisory Group (ESR-PAG), will talk about patient-centred applications of AI in radiology.

It is important that patients are involved in the development of AI, much as they would be in a clinical trial to develop a new drug, he said.

“The patient’s position should be noted because our involvement can help and inform AI developers and other professionals. The session will be a valuable exchange of views,” noted Briers, who formerly worked in the medical laboratory field.

For patients, it is important that imaging results contribute to selecting the most effective treatment and to achieving the best possible quality of life and quality of diagnosis. AI can also free up radiologists to concentrate on more important cases and tasks, and this is exactly what patients want, he continued.

Two areas of the debate must be clarified:

The first question is what should happen when there are incidental findings during AI development – for example, the programme spots a broken bone when looking for a lung nodule. He suggests that AI might search for such anomalies as part of its normal remit. AI could increase the number of incidental findings if the algorithm looks beyond the clinical question, with the technology perhaps providing a clearer, quicker answer as to what such a finding might be.

In Briers’ own case, during follow-up for stable disease, he had to wait a year to learn what his incidental findings were, despite a biopsy. “Incidental findings are one of the side effects of improving technology,” he remarked.

AI presents the radiology community with a unique moral issue, and ethics must factor in new technologies (provided by Dr. Adrian Brady).

The second issue is whether developers should use patient data to train algorithms, without informing the patients concerned.

“If you ask a patient whether you can use their images to improve an algorithm that will help with their diagnosis, staging and treatment, most will say yes,” he said. “If patients know they are participating in an imaging clinical trial to train and test algorithms, then no harm is done. Involving patients in the correct way is a positive direction.”

In addition, he pointed to patients not being fully aware that AI is being used in their diagnosis. In principle, patients don’t need to know if AI is being used because it runs in the background and does not replace the human radiologist, according to Briers. However, if an incidental finding arises, they should be informed, and any AI diagnosis must be confirmed by a human radiologist, who will also be the person to co-decide on the next step with the patient.

ESR Patient Advisory Group Session
PA Artificial intelligence (AI) in radiology: meeting expectations
Jointly organised by the ESR Patient Advisory Group (ESR-PAG) and the ESR Subcommittee on PIER

  • The untapped potential of AI in radiology
    James A. Brink; Boston, MA/US
  • Managing expectations for a patient-centred application of AI in radiology
    Erik Briers; Hasselt/BE
  • A patient perspective on data privacy in AI
  • Putting ethics first: key questions concerning AI in radiology
    Adrian Brady; Cork/IE
  • Data sets for training and validation of AI tools
    Luis Martí-Bonmatí; Valencia/ES


Ongena YP, Haan M, Yakar D, Kwee TC (2019) Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire. Eur Radiol. doi: 10.1007/s00330-019-06486-0

European Society of Radiology (ESR) (2019) Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging. 10(1):105

Geis JR, Brady A, Wu CC et al (2019) Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging. 10(1):101

Akinci D’Antonoli T, Weikert TJ, Sauter AW, Sommer G, Stieltjes B (2019) Ethical considerations for artificial intelligence implementation in radiology. ECR 2019 / C-2553

Boonn W, Gefter W, Cook T (2019) Artificial intelligence and radiologist workflow: overlooked challenges and opportunities. ECR 2019 / C-3797