Laytons Artificial Intelligence (AI) Series: Data Protection and AI

Artificial intelligence (AI) paves the way for innovative technological advances that benefit society, but it also has the potential to affect individuals' rights and freedoms. Consumers have become increasingly concerned about data security in recent years, resulting in a "techlash": a growing negative sentiment towards large tech companies.


AI uses data for various purposes, including but not limited to training and testing AI systems. Because AI is applied in many different ways, different data protection considerations arise for different AI systems; there is no one-size-fits-all approach to tackling data protection issues. Data protection legislation takes a risk-based approach, requiring organisations to comply with its obligations and implement appropriate measures in the context of their specific circumstances. In the UK, data protection is primarily governed by the Data Protection Act 2018 (DPA) and the UK General Data Protection Regulation (GDPR), which regulate the processing of personal data.


GDPR Principles

The seven principles under the GDPR are: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability. These principles should be embedded into the development and use of AI. We consider some of them in this article.


Lawfulness

AI can involve processing personal data in various ways and for various purposes. Therefore, it is important to identify the purpose and appropriate lawful basis for each distinct processing operation involved.

One of the lawful bases is consent. If consent is relied on, you need to ensure it is freely given, specific, informed and unambiguous, and involves a clear affirmative act by the individual. In practice, it is often difficult to obtain valid consent for complex processing operations. A possible solution is a process of graduated consent sought at the different stages of processing. However, consent may also be withdrawn, so if it is relied on as the lawful basis, the AI system must be able to cease processing the relevant data once consent is withdrawn.
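As a rough illustration of that last point, a processing pipeline can check current (not merely historic) consent before each operation. The sketch below is purely illustrative; the in-memory register, identifiers and function names are all invented for this example, and a real system would use a durable, auditable consent store.

```python
# Hypothetical in-memory consent register (user_id -> consent currently valid?).
consent_register = {"user-1": True, "user-2": True}

def withdraw_consent(user_id: str) -> None:
    """Record that the individual has withdrawn consent."""
    consent_register[user_id] = False

def process_for_training(record: dict) -> bool:
    """Process a record only if the individual's consent is still valid.

    Returns True if the record was processed, False if it was skipped.
    """
    if not consent_register.get(record["user_id"], False):
        return False  # consent absent or withdrawn: cease processing
    # ... pass the record to the training pipeline here ...
    return True

withdraw_consent("user-2")
print(process_for_training({"user_id": "user-1"}))  # True
print(process_for_training({"user_id": "user-2"}))  # False
```

The key design point is that consent is re-checked at the time of each processing operation, so withdrawal takes effect immediately rather than only at the point of collection.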

Alternative lawful bases may be available. According to ICO guidance, you must select the lawful basis that most closely reflects your relationship with the individual and the purpose of processing.


Fairness

When assessing fairness, you must consider how your use of personal data will affect the interests of individuals. Consideration must be given to whether the use of personal data falls within individuals' reasonable expectations. You can use a privacy notice to set out the purposes for which data is required and explain how it will be used.


Transparency

There must be transparency in the way you process data: you must be clear, open and honest with individuals about how and why you use their personal data. Organisations that earn consumer trust in this way may gain a competitive advantage. One way to build that trust is to explain how decisions are made.


Accuracy

You must ensure that the personal data you process is not “incorrect or misleading as to any matter of fact” and, where necessary, is corrected or deleted without undue delay. AI learns from data, and the more data on which a system is trained, the more statistically accurate it tends to become. However, you must balance this against the principle of data minimisation: if you can achieve sufficient accuracy while processing less personal data, you should do so.


Data Minimisation

Article 5(1)(c) GDPR states that personal data shall be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (data minimisation)”. At first glance, this appears to be a barrier for AI, which relies on large volumes and varieties of data.

However, ICO guidance confirms that this does not mean either “process no personal data” or “if we process more, we’re going to break the law”. The key point here is to only process personal data that is required for your purpose. The fact that you may use the data in the future will not in itself justify its collection, use and retention.
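In engineering terms, one common way to apply this principle is to whitelist, per purpose, the fields a pipeline is allowed to see and drop everything else at ingestion. The purpose name and field names below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical mapping from a stated processing purpose to the fields
# actually needed for it. Anything not listed is dropped at ingestion.
PURPOSE_FIELDS = {
    "churn_model": {"tenure_months", "plan", "monthly_spend"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return a copy of the record limited to the fields the purpose requires."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "A. Example", "email": "a@example.com",
       "tenure_months": 14, "plan": "pro", "monthly_spend": 42.0}
print(minimise(raw, "churn_model"))
# {'tenure_months': 14, 'plan': 'pro', 'monthly_spend': 42.0}
```

Because each purpose must be named before its fields can be whitelisted, this structure also discourages collecting data merely because it might be useful in future.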


Tackling bias and discrimination in AI output

Processing of personal data must be fair. Personal data must not be used in a way which leads to unjustified adverse effects. Processing personal data for profiling and automated decision-making may lead to bias and discrimination in AI outputs. To protect individuals’ right to privacy and right to non-discrimination, organisations must implement appropriate technical and organisational compliance measures (see below).

Bias and discrimination can also be caused by imbalanced data and/or training data that reflects past discrimination. Organisations therefore need to obtain high-quality training and test data for their AI systems.


Controller-Processor Relationship

AI systems typically involve multiple organisations processing personal data, which can make it difficult to determine whether you are a controller, processor, or joint controller. You must therefore be clear from the start, within the contract, about the roles and responsibilities of the organisations involved, as this will affect compliance and liability. You will be a controller if you have overall control over the purposes and means of processing personal data. Conversely, you will be a processor if you have no purpose of your own for processing the data and act only on the controller’s instructions.


Compliance with the Principles

Data Protection Impact Assessments (DPIA)

The majority of AI systems process data in ways that may result in high risks to individuals’ rights and freedoms, which triggers the legal requirement to undertake a DPIA. A DPIA helps organisations identify whose data is used, the extent to which individuals are likely to be affected, and possible mitigation measures.


Anonymisation

You will need to consider whether you need to use data that identifies individuals at all. Is it possible to strip out identifying data before analysis by anonymising it? If the data can be anonymised so that individuals cannot be identified, and your objective can still be achieved, it should be anonymised. The key point is to mitigate the risk of re-identification of individuals.
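A minimal sketch of the stripping step might look like the following. The field names are hypothetical, and note the important caveat in the comments: removing direct identifiers alone is often only pseudonymisation, because the remaining attributes may still allow re-identification in combination, so that residual risk must still be assessed.

```python
# Hypothetical set of direct-identifier fields to remove before analysis.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def strip_identifiers(record: dict) -> dict:
    """Drop direct identifiers from a record before it enters analysis.

    Caution: this alone may amount only to pseudonymisation. Quasi-identifiers
    (e.g. a rare combination of age band and region) can still re-identify
    someone, so re-identification risk must be assessed separately.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "A. Example", "email": "a@example.com",
          "age_band": "30-39", "region": "London"}
print(strip_identifiers(record))  # {'age_band': '30-39', 'region': 'London'}
```

Techniques such as generalising values into bands (as with `age_band` above) and aggregating small groups are common complements that further reduce the re-identification risk the text highlights.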


Privacy Notices

Privacy notices should be written in plain English, in a user-friendly form, with a person of average reading age in mind. For example, a short video can be used to explain how collected data is used. ICO guidance further provides that standardised icons can be used to explain the processing.


Future of data protection in the UK

The second reading of the Data Protection and Digital Information Bill, which would reform the UK data protection regime, has been delayed. The UK government has indicated that it still intends to reform the regime to make it more business- and consumer-friendly.

That said, more changes to data protection in the UK are expected. Although it is uncertain at this stage what these will be, they will likely lead to further divergence from the EU data protection regime. Companies operating internationally will therefore need to monitor the changes so that all relevant regimes are complied with.


Key takeaways

  • Organisations must ensure compliance with GDPR principles to build consumer trust, which can make their products more appealing.

  • High quality data allows organisations to better benefit from AI and facilitates more accurate AI output.

  • Anonymise data where possible to mitigate risks.

  • Be clear in contracts about who is the controller, processor, or joint controller.

  • There is no one-size-fits-all approach when it comes to data protection issues. Instead, organisations should consider adopting a combination of complementary approaches and tools suited to the organisation.


If you have any questions or require advice, please reach out to Carmen Yong or Esther Gunaratnam in our Corporate & Commercial Department.