The intersection of technology and healthcare has given rise to a burgeoning industry known as femtech, which focuses on female health and wellbeing. Leeanne Baker, managing director - senior QA/RA consultant at
This sector has seen significant growth in recent years, with 2023 witnessing an overall investment of $1.14 billion across 120 deals. However, despite these impressive figures, less than 5% of publicly funded research in the UK is dedicated to reproductive health, even though reproductive health issues affect a third of women. This disparity underscores the urgent need for more focused investment and research in this area.
The role of AI in femtech
Generative Artificial Intelligence (GenAI) is a rapidly emerging technology that is revolutionising practically every industry, and it is poised to do the same for femtech. GenAI can analyse vast amounts of unstructured data and identify patterns, offering new insights and potential breakthroughs in female health. For instance, AI can be used in genetic testing to provide personalised health advice, or to identify patterns that indicate underlying health conditions early on, allowing preventative management or early treatment to improve outcomes.
However, the deployment of AI in femtech is not without challenges. In the European Union, medical devices or in vitro diagnostics that incorporate AI solutions which could impact the safety of the product are classified as 'high-risk' under the EU AI Act, triggering additional regulatory obligations. An algorithm that disproportionately assigns false negatives, whether because of bias in the underlying data or in the AI development methodology, could drive fewer follow-up scans and, potentially, more undiagnosed and then untreated cases. This risk applies equally to femtech solutions and diagnostics, and stems from several factors, including a historical bias in medical research in which women have often been underrepresented, misdiagnosed or incorrectly accounted for.
Bias may be exacerbated by differences in a population or setting that act as confounding factors not captured in the characteristics of the training environment. For example, training data drawn from a single hospital will reflect the policies, processes, tools and demographic context of that hospital. When an AI tool built on such limited training context is used in another hospital, differences in patient case handling can result in lower predictive accuracy, because the model is not tuned to that hospital's approaches. A large inner-city hospital is likely to see very different patient demographics, co-morbidities, and environmental and lifestyle factors, alongside differences in clinician specialisation, compared to a small hospital in a suburban or rural setting – all of which can contribute to AI bias.
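The hospital-to-hospital shift described above can be illustrated with a deliberately simplified numeric sketch. The risk scores, the thresholding rule and the hospital labels below are all invented for illustration; they are not taken from any real model or dataset:

```python
# Hypothetical risk scores for patients who genuinely need follow-up.
hospital_a_positives = [0.62, 0.68, 0.71, 0.74, 0.80, 0.83]
hospital_b_positives = [0.48, 0.52, 0.55, 0.58, 0.61, 0.64]  # shifted population

# "Tune" a cut-off that flags every true case at hospital A.
threshold = min(hospital_a_positives)  # 0.62

def sensitivity(scores, cut_off):
    """Fraction of true cases flagged for follow-up at this cut-off."""
    return sum(s >= cut_off for s in scores) / len(scores)

print(sensitivity(hospital_a_positives, threshold))  # 1.0 at the source site
print(sensitivity(hospital_b_positives, threshold))  # far lower at the new site
```

The same cut-off that catches every case at the hospital it was tuned on misses most cases at the second site, which is exactly the false-negative pattern described above.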
Both regulators and manufacturers must therefore work to ensure that AI models are developed to reduce, and where possible eliminate, sources of bias, and that an adequate regulatory framework accounting for the peculiarities of this new tool is developed and applied. Suitable methods should also be in place to identify confounding factors and reduce bias, whether through context-specific model tuning or through improvements to the underlying model as more diverse and richer training data become available.
The issue of gender bias in AI
As existing AI models are often trained on limited datasets that do not adequately represent the diversity of the population, there is a risk not only of perpetuating but of amplifying existing disparities. For example, during COVID-19 vaccine trials, 28.3% of publications did not report the sex distribution among participants, and only 8.8% provided sex-disaggregated vaccine effectiveness estimates. A 2010 survey of 2,000 animal studies found the same pattern: 80% included more males than females. As late as 2016, 70% of biomedical experiments did not include sex as a biological variable, and where it was included, less than half considered both males and females. This lack of detailed data can obscure differences in how men and women respond to treatments, potentially leading to suboptimal care.
Addressing bias in AI
Recognising the potential for bias and the need for robust oversight, the European Commission proposed the EU Artificial Intelligence Act in April 2021. This act aims to establish harmonised rules for the marketing, service provision and use of AI systems within the EU. Additionally, the European Health Data Space seeks to create a common space for the exchange of, and access to, various types of health data. This initiative aims to build a transparent and secure data system that supports high-quality data exchange for AI applications in healthcare. Similar proposals are being enacted in the UK concerning access to NHS data for research, including research into medical AI systems.
To mitigate bias in AI, several strategies can be employed across the model development lifecycle. These include:
- Pre-processing data: Sampling and curating data before building the model to ensure a balanced representation.
- In-processing adjustments: Implementing mathematical approaches to incentivise balanced predictions during model training.
- Post-processing adjustments: Adjusting the model's output to correct any imbalances.
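As a concrete, deliberately simplified sketch of the first strategy, pre-processing a skewed dataset by oversampling the under-represented group might look like the following Python fragment. The record structure, field names and toy counts are assumptions for illustration, not a real clinical pipeline:

```python
import random

def rebalance_by_sex(records, seed=0):
    """Pre-processing: oversample the under-represented sex so the
    training set contains equal numbers of male and female records."""
    rng = random.Random(seed)
    males = [r for r in records if r["sex"] == "M"]
    females = [r for r in records if r["sex"] == "F"]
    minority, majority = sorted([males, females], key=len)
    # Sample with replacement from the minority group to close the gap.
    resampled = minority + [rng.choice(minority)
                            for _ in range(len(majority) - len(minority))]
    return majority + resampled

# Toy dataset with the kind of skew described above: 80 male, 20 female.
data = ([{"sex": "M", "value": i} for i in range(80)] +
        [{"sex": "F", "value": i} for i in range(20)])

balanced = rebalance_by_sex(data)
counts = {s: sum(1 for r in balanced if r["sex"] == s) for s in ("M", "F")}
print(counts)  # {'M': 80, 'F': 80}
```

In-processing and post-processing adjustments follow the same spirit but act later in the lifecycle: the former reweights the loss during training, the latter calibrates per-group decision thresholds on the trained model's outputs.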
Moreover, involving human experts who understand and can identify the specific biases present in datasets, alongside experts in clinical practice who understand the confounding factors of the medical field in which the AI system will be used, can help ensure the AI remains fair and limits bias. Additionally, deploying systems with a human-in-the-loop approach, in which the clinician has additional information and direct interaction with the patient to confirm the results of the AI system, is seen by many as crucial for maintaining the integrity of AI systems in healthcare.
The rise of femtech and the integration of AI in medtech present exciting opportunities to improve female health and wellbeing. However, these advancements come with significant challenges, particularly regarding bias in AI models. Addressing these challenges requires comprehensive strategies to detect and mitigate bias, along with robust regulatory frameworks to ensure the performance and efficacy of the end solution. By doing so, we can harness the full potential of AI in femtech, leading to more equitable and effective healthcare outcomes for women globally.