How will AI impact data protection compliance?
HR teams are learning more about the advantages and risks of AI tools such as ChatGPT all the time. One area that could require a review is data protection and GDPR, explains Asha Palmer.
Five years after its introduction, GDPR has succeeded in raising data privacy and security to the top of the compliance agenda for many responsible organisations.
Although it has yet to achieve all it set out to do, particularly in unifying how businesses handle data, it has driven significant improvements in the governance, protection, and use of personal information. It has also demonstrated that it is robust enough to cope with rapid technological innovation and the unexpected changes that Covid brought virtually overnight to working practices.
Initially, many believed that fines for non-compliance would primarily relate to security breaches, especially as the threats posed by criminals launching increasingly sophisticated cyberattacks continued to grow.
While fines for security breaches have been substantial, including £20m meted out to British Airways in 2020, most of the higher fines have related to data privacy and the lack of transparency concerning how personal information is processed. Amazon still has the dubious honour of receiving the largest fine to date of €746 million for misusing customer data.
Increased risk
But across the board, fines are increasing. According to a report from DLA Piper, European supervisory authorities reported approximately €1.64bn (£1.43bn) in fines for 2022, up 50% on 2021.
Enforcement of the regulations is still gathering momentum and will result in more widespread fines, meaning employers must remain vigilant and continuously assess their data practices. Although organisations may be adhering to GDPR now, reviews must be ongoing to incorporate changes in procedures and the deployment of evolving technologies, particularly artificial intelligence (AI).
How will AI impact the regulations?
Already used in recruitment and talent acquisition software, the rise of AI tools such as ChatGPT has fuelled speculation about whether this will necessitate changes to the current regulations.
Despite the common misconception that this type of activity is not covered, using AI to obtain and process personal data is already regulated under GDPR. In fact, the existing laws have already yielded more than €80m in AI-related fines.
Take the examples of facial recognition database company Clearview AI, which was fined more than £7.5m by the ICO in 2022, and Deliveroo, which was fined €2.9m by Italy’s data protection authority for discriminating against some of its workers.
Far from suggesting that the current GDPR legislation is lacking, these cases have demonstrated emphatically that everyone has the right to privacy and equal protection from data misuse, including when AI is involved.
That said, tools such as ChatGPT offer massive advantages to HR, from writing job requisitions to developing software code for new training applications and much more. However, many questions remain about how to use generative AI appropriately and ethically, so HR professionals must play their part in ensuring it is developed and used responsibly within the workplace.
Managing AI and data
Under GDPR, organisations must provide individuals with meaningful information about the logic involved in any automated decision-making that affects them, from the start of the recruitment process and throughout the HR lifecycle.
Automation includes any processes that leverage machine learning or artificial intelligence. Individuals must be advised when an automated decision is being made about them and provided with an explanation of how the algorithm or AI comes to its decision.
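As a loose illustration only (the class, field names, and contact address below are hypothetical examples, not a prescribed GDPR schema), the information an individual is entitled to could be captured in a simple record like this:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record of one automated decision about an individual."""
    subject_id: str            # pseudonymised identifier for the individual
    decision: str              # outcome, e.g. "shortlisted" or "rejected"
    system: str                # which tool or model made the decision
    logic_summary: str         # plain-language explanation of the logic
    human_review_contact: str  # route to contest or request human review

    def explanation(self) -> str:
        """Render a plain-language notice for the individual."""
        return (
            f"An automated decision ('{self.decision}') was made about you "
            f"by {self.system}. How it was reached: {self.logic_summary} "
            f"To contest this or request a human review, contact "
            f"{self.human_review_contact}."
        )

record = AutomatedDecisionRecord(
    subject_id="candidate-0042",
    decision="shortlisted",
    system="a CV-screening model",
    logic_summary="Skills and experience were matched against the role profile.",
    human_review_contact="hr-privacy@example.com",
)
print(record.explanation())
```

Whatever form such a record takes in practice, the point is that the fact of automation, the logic behind it, and a route to human review are all captured and communicable on request.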
ChatGPT and similar tools will further automate information gathering and decision-making. HR professionals will need to update governance and policies around their use in the workplace, taking into consideration security, privacy, confidentiality, and ethical implications.
The first stage is to conduct a thorough risk assessment to determine the potential issues associated with using AI and how it could affect employees. This should be as wide-ranging as possible, covering all instances where it could be misused or misappropriated and areas where it might unintentionally create bias or inequality in its decision-making.
It’s important to include any outsourced functions and suppliers that could impact GDPR, such as payroll, expenses, and time management, and understand how their use of AI might alter your organisation’s compliance.
Then policies, procedures and training need to be implemented to control how employees use AI and to communicate to staff how its use also affects them. Policies must be clear and prescriptive, explaining to employees what is and is not permitted, with mandatory training for all staff and contractors.
HR departments must also ensure that they have measures in place to respond to employee requests for information regarding how AI has been applied in their own particular circumstances.
Creating a comprehensive and sustainable generative AI governance structure will require shared accountability between stakeholders who develop the applications and those who use them. Understanding the risks, putting effective protocols in place, and maintaining transparency will be paramount as organisations continue to explore and harness the power of AI.