Out-Law Analysis
03 Jun 2024
Businesses adopting artificial intelligence (AI) need to be proactive about data privacy and ensure data collection, processing and storage are in line with both internal policy and data protection laws.
The way in which AI systems acquire data from multiple sources could present major privacy risks for companies. While some data is provided directly by users with their consent, data collected through behind-the-scenes methods such as cookies and tracking technologies is often obtained without individuals’ consent or knowledge. This uncertainty around data acquisition is especially evident in widely adopted generative AI systems such as ChatGPT.
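By way of illustration only, the sketch below shows the kind of consent gating that addresses this risk: tracking code runs only after an affirmative opt-in. It is written in TypeScript for a browser context; the "analyticsConsent" storage key and the loadAnalytics() helper are hypothetical placeholders, not a real vendor API.

```typescript
// Minimal sketch: tracking only starts after the user has opted in.
// "analyticsConsent" and loadAnalytics() are illustrative, not a real API.

type ConsentState = "granted" | "denied" | "unset";

function getConsent(): ConsentState {
  const stored = localStorage.getItem("analyticsConsent");
  return stored === "granted" || stored === "denied" ? stored : "unset";
}

function recordConsent(state: "granted" | "denied"): void {
  localStorage.setItem("analyticsConsent", state);
}

// Hypothetical loader for a tracking script; in practice this would
// inject the vendor's snippet, which may then set tracking cookies.
function loadAnalytics(): void {
  console.log("Analytics enabled: tracking cookies may now be set.");
}

// Tracking never runs by default; it is gated on explicit consent.
if (getConsent() === "granted") {
  loadAnalytics();
}
```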
User prompts are a particular concern for organisations using generative AI systems: these systems may learn from users’ questions and instructions, and store the prompts in the AI system’s database. Where users are unaware that their prompts may be stored and used to answer similar questions from others, businesses risk confidential or commercially sensitive information being leaked.
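One practical mitigation, sketched below in TypeScript under stated assumptions, is to scrub obviously sensitive patterns from prompts before they leave the organisation. The redactPrompt() helper and the patterns it matches are illustrative examples only, not an exhaustive safeguard; in practice this would sit alongside any training opt-outs the AI provider offers.

```typescript
// Illustrative sketch: strip common sensitive patterns from a prompt
// before sending it to an external generative AI service.
// The patterns and placeholder tokens are examples, not a complete list.

const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],          // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD_NUMBER]"],      // card-like digit runs
  [/\bProject\s+[A-Z][\w-]+\b/g, "[PROJECT_NAME]"],  // internal codenames (example)
];

function redactPrompt(prompt: string): string {
  return REDACTIONS.reduce(
    (text, [pattern, token]) => text.replace(pattern, token),
    prompt,
  );
}

// Usage: the redacted text is what actually reaches the AI provider.
const raw = "Email jane.doe@example.com the Project Falcon forecast.";
console.log(redactPrompt(raw));
// -> "Email [EMAIL] the [PROJECT_NAME] forecast."
```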
Businesses adopting AI should review the AI system’s terms of use and privacy policy, and ensure that the collection, processing and storage of data by the AI system comply with the business’s internal privacy policy as well as applicable data protection laws. In particular, businesses should:
- review and update their website data privacy notices to reflect the extent to which they use the AI system;
- maintain human oversight of AI processes, especially in sensitive areas, to address errors and unexpected outcomes (see the sketch after this list);
- provide employee training on the use of AI; and
- implement internal guidelines and standards for AI applications to help ensure fairness, transparency and accountability.
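As a minimal sketch of what human oversight can look like in practice, outputs touching sensitive areas are held for human review rather than released automatically. All names below, including the category labels and the review queue, are hypothetical.

```typescript
// Minimal human-in-the-loop sketch: outputs in sensitive areas are
// queued for a human reviewer rather than released automatically.
// Categories, the queue, and the release logic are all illustrative.

type Category = "marketing" | "hr" | "legal" | "finance";

const SENSITIVE: ReadonlySet<Category> = new Set<Category>(["hr", "legal", "finance"]);

interface AiOutput {
  category: Category;
  text: string;
}

const reviewQueue: AiOutput[] = [];

function releaseOrQueue(output: AiOutput): "released" | "queued" {
  if (SENSITIVE.has(output.category)) {
    reviewQueue.push(output); // a named human reviewer approves later
    return "queued";
  }
  return "released";
}

console.log(releaseOrQueue({ category: "marketing", text: "Spring campaign copy" })); // "released"
console.log(releaseOrQueue({ category: "hr", text: "Draft redundancy letter" }));     // "queued"
```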