AI is being widely used in surveillance software to enhance the monitoring and analysis of large amounts of video data, and to allow for more efficient and accurate surveillance. France’s plan to trial AI-powered video surveillance technology during the 2024 Olympic Games is the highest-profile example of AI’s role in detecting security threats, tracking people, and providing real-time alerts at large events.
Risks associated with using AI tools for surveillance include discrimination claims arising from unfair treatment of certain groups, and breaches of privacy and data protection laws. But there are several contractual solutions available to users to address these issues.
These should be assessed alongside recent regulatory developments in the UK, France and Spain.
The use of AI-powered monitoring software
Algorithmic video surveillance, more commonly known as ‘smart cameras’, uses computer software to analyse images captured by video surveillance cameras in real time. Algorithms are trained to detect predefined suspicious events, such as specific objects, behaviours or patterns in video footage, and to carry out movement analysis. The technology can be used to track or identify abnormal events, crowd movements, and the demographics of the people filmed, such as age range and gender, among other things.
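To make the mechanism concrete, the sketch below shows, in simplified form, how real-time event detection of this kind can be structured: each frame is compared against a learned background, and a frame is flagged when enough of it is in motion. It is a minimal illustration using OpenCV’s background subtraction; production systems rely on trained detection models, and the threshold and video source named here are hypothetical.

```python
# A minimal illustration only: real deployments use trained detection
# models, not plain background subtraction. Threshold and file name are
# hypothetical.
import cv2

MOTION_THRESHOLD = 0.02  # fraction of changed pixels that counts as an "event"

def watch(source):
    capture = cv2.VideoCapture(source)            # camera index or video file
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)            # pixels differing from learned background
        motion = cv2.countNonZero(mask) / mask.size
        if motion > MOTION_THRESHOLD:
            print(f"possible event: {motion:.1%} of frame in motion")
    capture.release()

watch("platform_feed.mp4")                        # hypothetical video source
```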
The UK government has invested in AI technologies for crime prevention. It plans to double the Safer Streets fund to £45 million, which facilitates not only the use of CCTV cameras in public places, such as parks, but also the installation of new AI-driven upgrades to process the information gathered. The AI software automatically analyses unfolding situations, identifies known suspects and suspicious objects, and recognises unusual behaviour, providing useful insights to police.
Network Rail’s Crowd Monitoring Solution at Waterloo Station, developed by UK-based company Createc, is another example of how AI is deployed in surveillance. The system recognises early signs of suspicious behaviour, and security operators receive real-time updates on crowd density and movement patterns to identify bottlenecks. The focus is now on developing the technology to recognise incidents such as people falling and malfunctioning escalators. Trials at Euston Station and Luton Airport showed that the technology helped prevent overcrowding during delays and has the potential to be used in bigger venues, including stadiums.
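The crowd-density alerting described above can be illustrated with a short sketch. It assumes per-zone people counts have already been produced by a person-detection model, which is out of scope here; the zone names, areas and alert threshold are hypothetical.

```python
# Threshold logic only; people counts are assumed to come from a separate
# person-detection model. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    area_m2: float       # floor area covered by the camera view
    people_count: int    # output of an upstream person detector

DENSITY_LIMIT = 2.5      # people per square metre before an alert fires

def density_alerts(zones):
    """Return (zone, density) pairs where crowd density exceeds the limit."""
    return [(z.name, z.people_count / z.area_m2)
            for z in zones
            if z.people_count / z.area_m2 > DENSITY_LIMIT]

concourse = [Zone("north gate", 120.0, 340), Zone("ticket hall", 200.0, 180)]
for name, density in density_alerts(concourse):
    print(f"ALERT: {name} at {density:.1f} people/m2")
```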
In France, smart cameras can be used in both the public and private sectors. In the public sector, AI-enhanced video surveillance has been used for tasks such as the detection of abandoned baggage, or for the exercise of administrative and judicial police powers by public authorities. In the private sector, the technology can be used to protect people and property in shops, concert halls and other establishments open to the public by detecting certain situations or behaviours. But this deployment is strictly supervised and limited by the French data protection authority (CNIL).
In preparation for the 2024 Olympic Games, France has recently given the green light for the trial of algorithmic video surveillance. This decision is aimed at ensuring the security of “sporting, recreational, and cultural events” until 31 March 2025. The experiments have already begun; in April 2024, algorithmic video surveillance was used during a football game and a concert.
Similarly, smart cameras are being used in Spain. In the public sector, some law enforcement bodies in Spanish cities use AI-driven surveillance systems to prevent crime and improve public safety. The implementation of AI-enhanced cameras by the General Directorate of Traffic (DGT), in particular, has marked significant progress in road surveillance and control. The government sees these cameras as a key tool to increase road safety and reduce traffic offences.
In Spain’s private sector, a number of companies use smart cameras for near-instant detection of health and safety risks from images taken by cameras installed on production sites. This type of software uses AI to identify a dangerous situation and alert supervisors so that it can be brought to an end. The technology can also be used to identify unsafe acts, unsafe conditions, and non-compliance with mandatory equipment requirements.
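The kind of rule-based alerting this describes can be sketched as a mapping from detection labels to safety findings. The labels, the required-equipment list and the helper names below are hypothetical; in practice the labels would come from a vision model trained on site footage.

```python
# The detection labels would come from a vision model trained on site
# footage; here they are hard-coded. Labels and PPE list are hypothetical.
REQUIRED_EQUIPMENT = {"helmet", "high_vis_vest"}

def safety_findings(detections):
    """Turn per-worker detection labels into supervisor alerts."""
    findings = []
    for worker, labels in detections.items():
        missing = REQUIRED_EQUIPMENT - labels
        if missing:
            findings.append((worker, "missing PPE: " + ", ".join(sorted(missing))))
        if "inside_exclusion_zone" in labels:     # an unsafe-condition label
            findings.append((worker, "worker inside exclusion zone"))
    return findings

frame = {"worker_07": {"helmet"},
         "worker_12": {"helmet", "high_vis_vest", "inside_exclusion_zone"}}
for worker, message in safety_findings(frame):
    print(f"supervisor alert: {worker}: {message}")
```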
Potential issues of using AI in surveillance software
Although AI-powered surveillance tools are becoming widely used, business users need to pay particular attention to several issues.
AI-driven surveillance technology can result in discrimination and unfair treatment of certain groups of people. It needs to be developed and trained in a way that minimises, and preferably eliminates, the risk of unfair or unintended bias with regard to protected characteristics, such as age, sex, disability and ethnicity, to avoid customers of the technology being exposed to discrimination claims. This requires forward-looking obligations on the technology supplier to maintain mechanisms that monitor, detect and minimise discrimination.
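One way such a monitoring mechanism could work, sketched below under assumed record and field names, is to compare the system’s false-positive rate across demographic groups on manually reviewed footage: a persistent gap between groups would be a signal of the unfair bias described above. The audit records here are illustrative only.

```python
# One bias-monitoring metric: per-group false-positive rates of the
# system's "suspicious" flags, measured against manual review. Record
# structure and field names are hypothetical.
from collections import defaultdict

def false_positive_rates(records):
    """Rate, per group, at which non-incidents were wrongly flagged."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for r in records:
        if not r["actual_incident"]:              # ground truth from manual review
            negatives[r["group"]] += 1
            if r["flagged_suspicious"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

audit_log = [
    {"group": "A", "flagged_suspicious": True,  "actual_incident": False},
    {"group": "A", "flagged_suspicious": False, "actual_incident": False},
    {"group": "B", "flagged_suspicious": True,  "actual_incident": False},
    {"group": "B", "flagged_suspicious": True,  "actual_incident": False},
]
print(false_positive_rates(audit_log))            # a large gap suggests possible bias
```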
AI-powered surveillance technology may also breach privacy and data protection laws. The use of this technology involves the collection and analysis of personal data without the consent of the individuals concerned. The lack of control over personal data means companies using these solutions have to put in place robust storage and appropriate safeguards, and ensure that an appropriate legal basis for the collection of the data is relied upon. There is also a heightened risk of cyber attacks that may compromise the integrity of the data collected.
In France and Spain, the data protection regulators have voiced strong concerns around discrimination, privacy and data protection. The CNIL stated that these new video tools can lead to massive processing of personal data, sometimes including sensitive data, and that the people concerned are not just filmed but automatically analysed to deduce, on a probabilistic basis, information that may enable decisions or measures to be taken concerning them. The French regulator sees the proliferation of augmented cameras, particularly in public spaces, as a major risk to individual and collective freedoms.
The Spanish Data Protection Agency (AEPD) also raised concerns about the risk of bias in decision-making systems and the resulting discrimination against natural persons, commonly referred to as algorithmic discrimination, as well as the risks relating to the social context and the collateral effects that may derive from data processing activities incorporating AI.
The AEPD highlighted three factors that affect the accuracy of data: errors that occur in the implementation of the AI system, whether caused by external elements or by programming or design mistakes; errors contained within the training or validation data; and the biased evolution of the AI model.
There are other business risks relating to compliance and ethical issues. The legal and regulatory framework requires customers to ensure AI is developed in line with the various laws and regulations that govern data privacy and security. Compliance is challenging, as this area of law is complex and constantly evolving. If errors are made or inaccurate results are produced by the surveillance, the company implementing it could be liable for harm or damages caused by subsequent false identifications, wrongful accusations or violations of privacy rights.
The vastly uncertain ethical landscape may also result in reputational damage for businesses that use AI software in surveillance. It is difficult to insist that suppliers develop their AI model in accordance with another organisation’s principles, and there is no single set of principles to be used when developing software. Customers of AI solutions face the risk that the supplier has not adopted a transparent approach or aligned the software with the customer’s values.
Contractual solutions to protect users of AI-powered surveillance software
In response to the risks highlighted, businesses should first ensure that the deployment of smart cameras complies with data protection regulations and consider establishing safeguards to reduce the risks for individuals.
The practical considerations for compliance include checking the legal basis for, and proportionality of, the processing, carrying out a data protection impact assessment, and informing individuals of their right to object. Possible safeguards include no use of biometric data, no interconnection with other processing, and no automated decision-making.
It is important for customers to understand what principles the supplier’s AI model has been trained in accordance with. The customer can then assess whether such principles are sufficient and appropriate and use this as an opportunity to educate suppliers on the regulatory framework.
There are also certain contractual and drafting techniques that businesses could adopt when drafting agreements for the provision of AI surveillance software. They include:
- Setting out the customer’s needs and values in conjunction with developed principles, such as the OECD’s AI principles;
- Implementing outcomes-based warranties around bias and discrimination, so that the consequences of any discrimination or data protection breach are addressed, rather than merely placing an obligation on the supplier around the process and the steps to be taken;
- Reviewing provisions against emerging industry standards to ensure the supplier has trained, designed and developed the software in accordance with Responsible Business Principles;
- Involving data protection expertise to ensure that the storage and collection of personal data is lawful, fair and transparent and appropriate accountability is placed on the supplier;
- Setting suitable liability caps for the customer to mitigate their exposure to claims where the software produces biased results, through no fault of their own; and
- Fairly allocating responsibility for monitoring and preventing the issues discussed, by ensuring that both the supplier and the customer acknowledge the ways the software may go wrong and implement drafting to protect against this.
Recent developments in the AI legal framework in different jurisdictions