AI SECURITY: AI must be developed, deployed, and operated in a secure and responsible way, the agreement says. (Photo: Istock)

US, UK, Japan, and other major powers sign AI security accord

The agreement, which excludes China, includes recommendations for monitoring AI systems for abuse, protecting data, and vetting software suppliers.

Twenty-two law enforcement and intelligence agencies from 18 different countries signed an international agreement on AI safety over the weekend, designed to make new versions of the technology “secure by design.”

This agreement comes months after the European Union passed its AI Act in June, banning certain AI technologies, including biometric surveillance and predictive policing, and classifying AI systems that could significantly impact health, safety, rights, or elections as high risk.

“AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realized, it must be developed, deployed, and operated in a secure and responsible way,” the latest agreement stated.

The agreement emphasized that with the rapid pace of AI development, security must not be an afterthought but rather a core requirement integrated throughout the life cycle of AI systems.

New security vulnerabilities

“AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats,” the report said. “When the pace of development is high — as is the case with AI — security can often be a secondary consideration.”

One way AI-specific security differs from standard cyber security is a phenomenon called “adversarial machine learning.”

The report calls adversarial machine learning a critical concern in the developing field of AI security, defining it as the strategic exploitation of fundamental vulnerabilities inherent in machine learning components.

By manipulating these elements, adversaries can disrupt or deceive AI systems, leading to erroneous outcomes or compromised functionality.
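The mechanics can be illustrated with a toy evasion attack. The sketch below is not from the agreement; the classifier, its weights, and the perturbation size are all invented for illustration. An attacker who knows (or can estimate) a model's weights nudges each input feature slightly in the direction that lowers the model's score, flipping its decision while keeping the input superficially similar.

```python
# Toy illustration of adversarial machine learning: an evasion attack
# on a simple linear classifier. All values here are hypothetical.

def predict(weights, bias, x):
    """Linear classifier: returns True ('malicious') if the score is positive."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return score > 0

# Hypothetical model that flags inputs as malicious.
weights = [2.0, -1.0, 0.5]
bias = -0.5

# Original input: correctly classified as malicious (score = 1.1 > 0).
x = [0.8, 0.2, 0.4]

# Adversarial perturbation: move each feature a small step *against*
# the sign of its weight, so every change pushes the score downward.
eps = 0.4
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

print(predict(weights, bias, x))      # True  -> flagged as malicious
print(predict(weights, bias, x_adv))  # False -> small tweak evades detection
```

Against real neural networks the same idea is applied via gradients rather than hand-read weights, but the principle is identical: tiny, targeted input changes can deceive the model without fooling a human observer.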

Aside from the EU’s AI bill, in the US, President Joe Biden signed an executive order in October to regulate AI development, requiring developers of powerful AI models to share safety results and critical information with the government.

China is not a signatory

The agreement was signed by government agencies from Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, and Singapore, in addition to the UK and the US. Absent from the agreement was China, a powerhouse of AI development and the target of several US trade sanctions intended to limit its access to the high-powered silicon required for AI development.

In a speech at a chamber of commerce event in Taiwan on Sunday, TSMC’s chairman Mark Liu argued that the US move to exclude China will lead to a global slowdown in innovation and a fragmentation of globalization.

AI remains a legal minefield

The agreement, while nonbinding, primarily offers general recommendations and does not address complex issues regarding the proper applications of AI or the methods of collecting data for AI models.

It does not touch on the ongoing civil litigation in the US over how AI companies ingest data to train their large language models, and whether this complies with copyright law.

Within the US, several authors are suing OpenAI and Microsoft, alleging copyright infringement and intellectual property violations over the use of their creative works to train OpenAI’s ChatGPT, highlighting growing concerns about AI’s impact on traditional creative and journalistic industries.

According to K&L Gates, OpenAI and other defendants in these cases are leveraging defenses such as lack of standing and the fair use doctrine, and with courts approaching early cases skeptically, the future of AI litigation remains “uncertain.”
