
DHS Charts Out AI Security Strategy

As part of a broad roadmap for its AI initiatives, the Department of Homeland Security (DHS) on Monday outlined the steps it is taking to help manage the security risks in AI models, systems and use cases.

The DHS this week showcased how it is expanding its use of AI-related technologies across various operations, including efforts by the Federal Emergency Management Agency (FEMA) to more efficiently assess the impacts of natural disasters on buildings, and by law enforcement officers investigating crimes. Across these use cases, however, the DHS stressed that cybersecurity is a significant challenge, and it plans to identify the associated security risks. While the DHS has used AI for well over a decade, the popularity of generative AI over the past year, and the ensuing executive order on AI from the White House, lit a fire under the DHS and many other federal agencies across the U.S. government to take a closer look at the technology’s benefits and risks.

“Of particular concern are impacts of AI attacks on critical infrastructures, which could result in nefarious actors disrupting or denying activities related to Internet of Things (IoT) technologies or networked industrial systems,” according to the DHS in its roadmap. “Generating and passing poisoned data into a critical sensor could trigger downstream impacts, such as service disruptions or system shut-offs.”
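To make that poisoning scenario concrete, here is a minimal, hypothetical Python sketch, not drawn from the DHS roadmap, of how an attacker who can slip inflated readings into a sensor’s training data might drag a naive anomaly detector’s baseline upward until a genuinely dangerous reading is no longer flagged. The sensor values and the simple mean-plus-standard-deviation thresholding scheme are illustrative assumptions.

import statistics

def learn_threshold(readings, k=3.0):
    # Flag any reading more than k standard deviations above the mean.
    return statistics.mean(readings) + k * statistics.pstdev(readings)

# Hypothetical clean training data: a pressure sensor hovering near 100 units.
clean = [100.0 + 0.5 * (i % 5) for i in range(200)]

# Poisoning: the attacker mixes inflated readings into the training stream.
poisoned = clean + [140.0] * 40

attack = 135.0  # a genuinely dangerous reading the detector should flag

for label, data in (("clean", clean), ("poisoned", poisoned)):
    limit = learn_threshold(data)
    print(f"{label} threshold = {limit:.1f}; flags {attack}? {attack > limit}")

Trained on clean data, the detector flags the dangerous reading; trained on the poisoned stream, the learned baseline climbs past it and the reading passes as normal, the kind of downstream impact the roadmap warns about.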

The DHS plans to address these issues by creating a number of independent evaluation processes for the AI systems it uses, including a test facility that will assess pilots, algorithm training and use cases. It also plans to hold a HackDHS for AI Systems assessment, in which vetted researchers will hunt for security flaws in DHS systems that leverage AI. On the defense side, the DHS said it plans to evaluate AI-enabled vulnerability discovery and remediation tactics that can be used on federal civilian government systems.

In an effort to provide broader guidance on securing AI systems, the DHS is also directing CISA to work with NIST, the NSA and the FBI to create and publish “actionable risk management guidance” for critical infrastructure owners, as well as for data scientists, developers and decision makers. These recommendations will help those groups make informed decisions about both the development and deployment of AI systems. CISA will also give the Office of Management and Budget (OMB) recommendations on external testing of AI for federal agencies, and work to develop best practices for cybersecurity red teaming of AI systems, the DHS said.

Many of these measures fall in line with the strategy outlined in Biden’s executive order from October, which in part set the stage for developing and deploying “responsible AI” and directed various federal agencies to examine the security risks of AI systems.

However, efforts by the DHS and the U.S. government as a whole still face an array of challenges in better understanding and improving the security of AI models and systems. Part of the difficulty is that there are different pain points across this technology’s development stages, from the AI models themselves to the use cases built on top of them. The foundational large language models (LLMs) behind generative AI, for instance, carry many inherent risks, including opaque architectures. Concerns over these types of challenges have led experts to call for regulatory measures from the government, rather than voluntary guidelines.

In addition to improving “AI safety and security,” the DHS also outlined goals around protecting individuals’ privacy, civil rights and civil liberties, and creating strong partnerships around AI. The AI Safety and Security Board, established by the DHS, is one such partnership effort, aimed at better understanding the security risks of AI systems. The board will bring together AI experts from the private sector, civil society, academia and government to advise the critical infrastructure community and the broader public on security best practices for developing and deploying AI.

“This Board will bring together preeminent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government,” according to the DHS. “This Board will issue recommendations and best practices for an array of AI use cases to ensure AI deployments are secure and resilient.”