White House Implements AI Safety Reporting Mandate

Under the now-live White House executive order requirement, developers of the “most powerful AI systems” must report “vital information” related to cybersecurity measures, training plans and more.

The White House said it has made headway on several pieces of its AI executive order, including a key component requiring developers of the “most powerful AI systems” to report “vital information” related to cybersecurity measures, training plans and more.

On Monday, the White House AI Council convened to discuss these updates from the Biden administration, which come three months after the executive order was announced in October as a way to set the stage for developing and deploying “responsible AI.”

“The Executive Order directed a sweeping range of actions within 90 days to address some of AI’s biggest threats to safety and security,” according to the Biden administration on Monday. “These included setting key disclosure requirements for developers of the most powerful systems, assessing AI’s risks for critical infrastructure, and hindering foreign actors’ efforts to develop AI for harmful purposes.”

The White House is specifically using Defense Production Act authorities to require system developers of “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to report safety test results to the Department of Commerce.

According to the executive order, this information includes any “ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats” and “the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights.” AI safety tests results may also include the discovery of software vulnerabilities and development of associated exploits or the use of software or tools to influence events.

“These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems,” according to the White House.

The Defense Production Act is a Cold War-era law granting emergency authority to presidents for expediting materials and services needed for national defense; it has previously been leveraged by presidential administrations for resources related to the COVID-19 pandemic, cyberespionage and more.

The White House in its executive order also directed the National Institute of Standards and Technology (NIST) to develop a framework for assessing safety measures, “relevant AI red-team testing” and more. However, the deadline for that NIST framework is not until July 2024.

In another AI executive order update, the Department of Commerce has proposed a draft rule under which U.S. cloud companies would need to report whether they are providing computing power for foreign AI training of “the most powerful models, which could be used for malign activity.” Under the proposed rule, providers would be required to report “instances of training runs by foreign persons for large AI models with potential capabilities that could be used in malicious cyber-enabled activity.”

Finally, under the executive order, nine agencies have submitted their assessments of the risks that the use of AI could pose in critical infrastructure sectors. These agencies include the Department of Defense, the Department of Transportation and the Department of the Treasury.

“These assessments, which will be the basis for continued federal action, ensure that the United States is ahead of the curve in integrating AI safely into vital aspects of society, such as the electric grid,” according to the White House.

As the development of AI systems continues to gain traction, all eyes are on regulatory efforts related to these systems, particularly where cybersecurity is concerned. The models underpinning AI systems raise many security concerns, including the opacity of their architectures and the potential for input data to be poisoned. While the executive order focuses on AI system development, its long-term impact on these inherent challenges has yet to be seen.

The executive order also outlined a number of other cybersecurity initiatives around AI, including an order to develop AI tools that would find and fix flaws in critical software. Here, the Department of Homeland Security said it would partner with the Department of Defense to create a pilot program aimed at developing an AI capability for fixing vulnerabilities in critical U.S. government networks. Because DHS and the Secretary of Defense were given 270 days to report on the results of this pilot program, that initiative was not included in the White House’s 90-day updates.