Out-Law News

US cloud providers to face new AI cyber risk duties


US cloud providers are to face new legal obligations to help US authorities identify artificial intelligence (AI) systems that could be used in “malicious cyber-enabled activity” and that are being developed by foreign organisations using their cloud-based infrastructure services.

The regulations are to be drafted within 90 days, in accordance with measures outlined by US president Joe Biden in a new executive order he has issued on safe, secure and trustworthy AI.

The executive order is multi-faceted.

It commits the US government to enforcing existing legislation and to enacting “appropriate safeguards” to address harms arising from the use of AI, such as fraud, unintended bias, discrimination, and infringements on privacy. It also provides for the development of new guidelines and standards to support the development and deployment of safe, secure, and trustworthy AI systems, as well as separate guidelines that will require AI-generated content to be clearly labelled.

The order promotes responsible innovation, competition and collaboration on AI, and commits to giving US workers a voice in ensuring AI tools “support responsible uses of AI that improve workers’ lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation”.

The order also paves the way for potential changes to guidelines on patent eligibility and to copyright rules to account for AI, as well as changes to US immigration requirements to encourage workers to come to the US to work in AI-related roles.

Cyber risk posed by advances in AI is another topic addressed in the order – with fresh requirements set to follow for US infrastructure-as-a-service providers.

Biden said: “I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities … to address the use of United States Infrastructure as a Service (IaaS) products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors.”

Under the order, new regulations are to be proposed within 90 days that would require US IaaS providers to report to the secretary of commerce “when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity”.

The reporting requirements are to be set by the secretary of commerce, but as a minimum will include a duty on the providers to disclose the name of the foreign person and “the existence of any training run of an AI model meeting the criteria” outlined in the order and the future regulations.

The IaaS providers will also be obliged to require any foreign resellers of their products to submit such reports to the secretary of commerce – and to prohibit those foreign resellers from using their products until they do so.

Further regulations are to be proposed within 180 days of the executive order that would require US IaaS providers to “ensure that foreign resellers of United States IaaS products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller”. The regulations will set minimum standards on the opening of such accounts, including the types of documents that will be needed for identity verification, and in relation to record-keeping duties covering information such as the names and contact details of the foreign persons, the source and means of payment they use, and their IP addresses.

Cybersecurity adviser Regina Bluman, of Pinsent Masons’ cyber professional services team, said: “These proposals may be considered unpalatable by many US cloud providers amidst continued scrutiny, particularly in Europe, of the extent to which US authorities can access data stored by US technology providers. Many organisations have entirely valid reasons for seeking to keep secret new products and services they are working on – not least to protect their intellectual property rights with a view to commercialisation.”

“Numerous studies have found that the US and China lead as the two largest sources of cyber crime globally. Assuming most Chinese persons do not use American infrastructure anyway, it means almost 40% of the source of cyber crime globally will fall outside the scope of Biden’s order, severely limiting the impact it might have. There are also practical questions over the extent to which the reporting requirements could be evaded and meaningfully enforced. So, while the development and use of AI tools and models warrants carefully considered regulation, there are probably other steps which could be taken if security was truly the goal – the measures as proposed appear more focused on achieving US protectionism.”

Publication of the new executive order came ahead of the AI safety summit being hosted by the UK government on Wednesday 1 and Thursday 2 November, where a focus is on building understanding of, and consensus for action on, the risks posed by ‘frontier AI’, which the UK government has defined as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”.
