The use of artificial intelligence (AI) systems by businesses operating in the Dubai International Financial Centre (DIFC) is set to be governed by new regulations under proposed changes to the free zone’s data protection regime.
The DIFC is consulting on amendments to existing data protection regulations built on DIFC Data Protection Law No. 5 of 2020. Among the changes planned are new “controls and guardrails” on the processing of personal data via “digital enablement technology, such as artificial intelligence and autonomous and automated systems”.
Under the plans, businesses would be required to ensure that the AI systems they use are designed in accordance with principles of fairness, ethicality, transparency, security and accountability.
They would further be obliged to tell data subjects where they use AI systems to process their data – including the purposes of the processing, the output the AI system produces and how that output is used – and where further processing of their personal data may be undertaken on a basis that is not human-initiated or directed. They would also need to notify data subjects about how use of the system could affect the way they exercise their data subject rights.
Provision is also made under the proposed new regulations for the use of AI systems in the DIFC to be subject to audit or certification requirements in future, while businesses would also face record-keeping duties as well as disclosure obligations in relation to their efforts to address “unjust bias” and support law enforcement agencies in preventing or prosecuting crimes.
The DIFC said: “Generative machine learning or enablement is potentially extremely useful in terms of positive outcomes such as sustainability, transparency, accountability, and improving quality of life. At the same time it is potentially extremely dangerous in terms of resulting unwanted bias, controversial political or financial implications, or in impressions or directions of actions that negatively impact the data subject himself.”
“Implementing basic technical, organisational, and ethical obligations of controllers and processors are the starting point for ‘regulating’ any types of generative, machine-learning, large language model systems. This is because they are still a vastly unknown quantity but the ability to assert controls and concepts in order to direct the processing and mitigate risk is not. Until such systems and use cases are better understood, setting out regulations reinforcing relevant controls and concepts to fairly and ethically develop them is of immediate concern,” it said.
Other changes being consulted on include new rules around the reporting and handling of personal data breaches – including circumstances where a person inadvertently comes into the control or possession of personal data, for example by being mistakenly sent an email containing it – as well as new rules on the collection and use of personal data in the context of providing digital communications and services.
The consultation is open for feedback until 17 May. Technology law expert Martin Hayward of Pinsent Masons in Dubai said the proposals should prompt businesses in the DIFC that use AI systems to better understand how those systems work – particularly where they are adopted globally across a multinational organisation or group of companies – and what information they might need to provide to data subjects.
The DIFC is not the first jurisdiction to consider how AI should be regulated.
A new AI Act is proposed in the EU, for example. That draft legislation is currently being scrutinised by the European Parliament and Council of Ministers and envisages regulating AI systems in accordance with the level of risk they are considered to pose.
In contrast, in the UK, the government is proposing to retain the existing sector-by-sector approach to the way AI use by businesses is regulated but to supplement that with a cross-sector framework of overarching principles that regulators will have to “interpret and apply to AI within their remits”. The five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Ross McAlister of Pinsent Masons said: “In respect of the principles, the DIFC proposals closely resemble what is planned in the UK, though an apparent difference is that businesses in the DIFC would be required to provide evidence to any affected party of any algorithm that instructs the AI system itself to seek human intervention, as opposed to the data subjects themselves requesting human intervention, where the processing may result in unjust bias, or relevant party where the system is accessed by government authorities – including law enforcement authorities.”