Some of the larger players involved in AI development and deployment, including Amazon, Google, Microsoft, and OpenAI, are joining forces with government agencies and industry groups in a new AI safety and security initiative that will develop guidelines, best practices, and other guidance in several different areas.
The group is under the umbrella of the Cloud Security Alliance (CSA) and includes participation from a number of companies as well as the Cybersecurity and Infrastructure Security Agency (CISA), which has deepened its focus on AI security in recent months. In addition to the corporate and government participants, the effort also includes individual practitioners and subject matter experts in the various areas of focus within the broader effort. The CSA AI Safety Initiative has already begun preliminary work and has established four separate working groups so far: AI technology and risk; AI governance and compliance; AI controls; and AI organizational responsibilities.
The new initiative arrives at a time when government agencies, organizations of all sizes, and security practitioners are trying to get their arms around the potential benefits and drawbacks of AI usage. Last month, CISA, in collaboration with the UK’s National Cyber Security Centre, released guidelines for secure AI system design, development, deployment, and operation, and other organizations have developed guidelines in this area as well. Google released its own secure AI framework earlier this year.
“Artificial intelligence (AI) systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems,” the CISA guidelines say.
“However, AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”
The CSA effort is concerned with many of the same topics, along with a focus on the ways in which the owners of the large AI models can collaborate with each other and third parties to define security and safety norms, identify potential threats, and provide guidance for users who deploy and interact with AI systems. The idea is similar to the way that the industry approached the challenge of cloud adoption and security, but with more at stake.
“We floated some ideas to industry partners and we got some outreach back based on those. We had been wanting to do work on AI for several years, but the pairing of generative AI with cloud delivery models is where we got the fuel for this idea,” said Jim Reavis, CEO of the CSA.
“In the next few months, we want to have some research completed that helps people understand generative AI from a definitional context. We have the large frontier model companies, cloud providers, SaaS providers, and we want everyone to understand the unique and overlapping capabilities they have.” Within the next year, Reavis hopes the AI Safety Initiative will have a completed AI controls framework in place that users and other stakeholders can apply and security professionals can use to understand the role that AI will play in their jobs going forward.
“We want to have security professionals understand that generative AI is a necessary foundational tool for how they’re going to be doing their jobs,” Reavis said.
Looming over the work that private industry is doing on AI safety and security is government regulation, which is almost certainly coming, and likely sooner rather than later.
“This is going to be more heavily regulated than we’ve seen in cloud computing. The encouraging thing is that the companies are more welcoming. Part of that is because the stakes are higher. They’re very thoughtful about it. Regulation is coming and it’s going to be a challenge,” Reavis said.