BSI publishes guidance to build trust in the use of AI in healthcare

BSI has published guidance designed to build greater digital trust in the AI products used to diagnose or treat patients, ranging from medical devices to smartphone chatbots and in-home monitoring tools.


With the advent of more innovative AI tools, clinicians and health providers will have the opportunity to make informed diagnostic decisions efficiently and to intervene in, prevent, and treat disease, ultimately improving patients’ quality of life and benefiting society.

As debate continues globally regarding the appropriate use of AI, Validation framework for the use of AI within healthcare – Specification (BS 30440) is being published to help increase confidence among clinicians, healthcare professionals, and clinical providers that the tools they are using have been developed in a safe, effective, and ethical way. The auditable standard, which can be used globally, is specifically targeted at products whose key function is to enable or provide treatment or diagnosis, or to support the management of health conditions.

Forecasts suggest the global healthcare AI market could exceed $187.95 billion by 2030. Healthcare providers and clinicians may face time and budgetary constraints, or lack the in-house capability and capacity to assess AI products themselves, so the specification can support decisions about which tools to use. It can help clinicians and patients evaluate healthcare AI products against criteria including clinical benefit, standards of performance, successful and safe integration into the clinical work environment, ethical considerations, and socially equitable outcomes.

It covers healthcare AI products used in a range of settings: regulated medical devices (such as software as a medical device), user-facing products (such as imaging software), and patient-facing products (such as smartphone chatbots using AI). It also encompasses AI products used in the home (such as monitoring products) or in community, primary, secondary, or tertiary care settings. The specification applies to products, models, systems, or technologies that use elements of AI, including machine learning, and is also relevant to AI system suppliers and product auditors.

The specification was developed by a panel of experts, including clinicians, software engineers, AI specialists, ethicists, and healthcare leaders, partnering to identify best practice. It draws together existing guidance literature and good practice, translating the assessment of complex functionality into an auditable framework against which an AI system can be assessed for conformity. Healthcare organisations can mandate BS 30440 certification as a requirement in their procurement processes, ensuring these systems have met a known standard.

Scott Steedman, director general, standards, BSI, said: “The new guidance can help build digital trust in cutting-edge tools that represent enormous potential benefit to patients and the professionals diagnosing and treating them. AI has the potential to shape our future in a positive way and we all need confidence in the tools being developed, especially in healthcare. This specification, which is auditable, can help guide everyone from doctors to healthcare leaders and patients to choose AI products that are safe, effective, and ethically produced.”

Jeanne Greathouse, global healthcare director, BSI, added: “This standard is highly relevant to organisations in the healthcare sector and those interacting with it. As AI becomes the norm, it has the potential to be transformative for healthcare. With the onset of more innovative AI tools, and AI algorithms’ ability to digest and accurately analyse copious amounts of data, clinicians and health providers can efficiently make informed diagnostic decisions to intervene, prevent and treat diseases, ultimately improving patients’ quality of life.”

The specification answers a need for an agreed validation framework for AI development and clinical evaluation in healthcare. It builds on a framework first trialled by experts at Guy’s and St Thomas’ Cancer Centre and revised through subsequent discussions with stakeholders engaged in the field of AI and machine learning. In parallel, BSI has been working in partnership with regulators, healthcare organisations, and other bodies to consider the role of standards in supporting the regulation and governance of AI in healthcare.
