Protecting individuals' data in an automated reality shaped by the pandemic

Charlotte Walker-Osborn, partner and international head of the artificial intelligence and technology sector at global law firm Eversheds Sutherland, writes about the role of AI and robots in assisting the NHS.

In April this year, it was announced that Rainbird had partnered with the NHS to build an online tool that provides tailored advice on self-isolation measures to NHS staff. The tool is one of many COVID-19-related apps with automation and/or artificial intelligence (AI) at their heart, which seek to address the problems the pandemic has created for organisations and individuals.

Like many such tools, the app requires sensitive and/or personal data to be entered into it. Given negative press reports about the use of individuals' data captured by apps and tools, how do we keep privacy and confidentiality considerations foremost, so as to establish public trust whilst preserving the innovation that technology can bring?

How do we protect the confidential information and data that is processed by apps?

Given the types of data these apps often harness (for example, health data and location data), it is critical to protect the confidentiality both of the original data the technology will process (input data) and of any derived or output data the technology generates. It is also imperative to comply with privacy rules.

Privacy and personal data

As with any activity involving personal data, a data protection impact assessment should be performed, and organisations, including the NHS, should ensure their contracts with the technology provider allow them to allocate and/or discharge their regulatory and transparency obligations fairly. Depending on the collaboration, it may be possible to use anonymised or pseudonymised data sets: does personal data really have to be entered into the app at all? Even where data is anonymised, there are still privacy issues to address, including the need to ensure that a data subject cannot be re-identified. Aside from the law itself, there is a wealth of guidance and blogs from the UK privacy regulator, the Information Commissioner's Office (ICO), including an auditing framework specifically for the use of personal data by artificial intelligence.
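By way of a concrete illustration (the field names and record structure below are assumptions for the sketch, not details of the Rainbird tool), pseudonymisation commonly replaces direct identifiers with keyed hashes so that records remain linkable without exposing the raw identifier. Note that pseudonymised data is still personal data under the GDPR, because re-identification remains possible for whoever holds the key.

```python
import hashlib
import hmac

# The key is held separately from the data set (e.g. in a vault); whoever
# holds it can re-link records, so pseudonymised data remains personal data.
SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym via HMAC-SHA256,
    keeping records linkable across uses without exposing the identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: keep only what the app needs, pseudonymise the direct
# identifier and coarsen quasi-identifiers such as a full postcode.
record = {"nhs_number": "000 000 0000", "postcode": "SW1A 1AA", "symptoms": "cough"}
safe_record = {
    "patient_ref": pseudonymise(record["nhs_number"]),
    "postcode_area": record["postcode"].split()[0],  # "SW1A"
    "symptoms": record["symptoms"],
}
print(safe_record)
```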

Consents/right to use the data

Clearly, there must be a right to use individuals' data in the way envisaged, both now and in the future. This becomes more problematic where personal data is involved, particularly where organisations need either to gain specific consent for the usage or to find another legitimate basis under the law to utilise that personal data. Where the platform and data belong to different parties, as is generally the case here, it is important to be clear on how patients'/individuals' data will be, can be and cannot be utilised by the technology company in the future. Setting this out in the contract with the technology provider is critical.

Getting the right data and ensuring there is no bias

Where the apps use AI, it is important to take into account that AI technology evolves through use. Iteratively training an AI system on data gives rise to a new model whose properties and behaviour are modified by that training data, and generally generates new output data which may be both confidential and personal in nature. There are a number of high-profile examples where insufficient data, the wrong data (including historical data) or the wrong training on that data has led to wrong decisions and introduced bias. It is therefore imperative that organisations think these elements of the project through carefully. It will be very important to be able to understand what data was used, how it was tagged and how the model was trained and, increasingly, to be able to justify how a decision was arrived at.
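As a sketch of what that audit trail can look like in practice (the fields and values below are illustrative assumptions, not a prescribed standard), a simple provenance record stored alongside each trained model captures what data was used, how it was tagged and what limitations were known at the time:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical demo data set: a small CSV of labelled training examples.
with open("training_data.csv", "w") as f:
    f.write("age_band,symptom,label\n18-30,cough,self_isolate\n")

def dataset_fingerprint(path: str) -> str:
    """Content hash of the training data, so the exact data set behind a
    given model version can be verified later in an audit."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

audit_record = {
    "model_version": "1.3.0",
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "dataset_sha256": dataset_fingerprint("training_data.csv"),
    "tagging_notes": "Symptom labels reviewed by two clinicians.",
    "known_limitations": ["Under-represents patients aged under 18."],
}
print(json.dumps(audit_record, indent=2))
```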

AI and cyber security

Ultimately, these technologies reside on servers, whether the NHS's or another organisation's own servers, the technology provider's, or the cloud. Data and technology inevitably bring cyber risk. It is therefore critical that organisations address this in their contracts with their technology providers, and that a detailed analysis takes place early on regarding the technology set-up, the data flows and the security around them. The analysis is largely the same as for other technology projects. This type of due diligence and cyber-resilience focus is crucial to protecting individuals' data and ensuring confidence in the tools.
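As one baseline control among many (the library choice below is an assumption for illustration, not a recommendation tied to any particular tool or provider), encrypting records at rest with a key held outside the data store limits the impact of a server compromise:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetched from a key-management service
fernet = Fernet(key)

plaintext = b'{"patient_ref": "ab12...", "symptoms": "cough"}'
token = fernet.encrypt(plaintext)   # store the ciphertext, never the plaintext
restored = fernet.decrypt(token)    # decrypt only on authorised access

assert restored == plaintext
```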

Governance and transparency

Close governance, transparency and auditability around the use of data/confidential information throughout the project is vital. These rights should be contracted for, including how confidential information/data is treated during and at the end of the project (which may, depending on the solution, include expunging the data). Where the technology involves AI, this is likely to be enshrined in AI-specific law in both the EU and the UK in the coming year or so. The EU's White Paper on Artificial Intelligence is currently under consultation, and the UK is performing a similar exercise looking at rules around AI and data.

Conclusions

There are many technical, data governance and contractual steps which organisations can and should take to protect the confidential information and personal data of both their own organisation and their customers/patients when adopting technology solutions. The above is merely a snapshot of the key points. Ultimately, detailed analysis of the information being placed in the systems and how derived data is created and used is at the heart of this. More than ever, contracting carefully around ownership, licensing, treatment/processing and usage of this confidential information and personal data is crucial, as is close governance of the project throughout.

Please note that the information provided above is for general information purposes only and should not be relied upon as a detailed legal source.
