Executive Branch Continues to Take Steps to Regulate AI in Absence of Federal Legislation: Commerce Proposes New Mandatory AI Reporting Requirements

On September 11, 2024, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) published a proposed rule that would create a mandatory quarterly reporting requirement for U.S. persons and U.S. entities that develop, acquire, or possess advanced artificial intelligence (AI) models and computing clusters (Proposed Rule). The Proposed Rule fulfills the Department of Commerce’s mandate under Section 4.2 of the October 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI Executive Order) to collect information on dual-use foundation models and large-scale computing clusters. BIS, which is authorized to collect this information under the Defense Production Act (DPA), contends that the proposed industry reports are essential for the government to continuously evaluate the AI capabilities of the Defense Industrial Base (DIB) and safeguard America’s national defense.

Background

To protect U.S. national security and ensure the U.S. government can properly assess whether U.S. entities are developing AI in a safe and reliable manner, the AI Executive Order directed BIS to collect information from U.S. entities on an ongoing basis concerning their use, possession, and development of “dual-use foundation models” and large-scale computing clusters. A “dual-use foundation model” is an AI model that: (a) is trained on broad data; (b) generally uses self-supervision; (c) contains at least tens of billions of parameters; (d) is applicable across a wide range of contexts; and (e) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to U.S. national security, economic security, or U.S. public health or safety, or any combination of those matters. Like other dual-use technologies, dual-use AI foundation models have military applications. They can, among other things, be used to enhance the maneuverability, accuracy, and efficiency of military equipment and to enhance signals intelligence devices. Accordingly, in the hands of U.S. adversaries, dual-use foundation models could “substantially lower the barrier of entry for non-experts” to develop weapons of mass destruction, and could enable automated cybersecurity vulnerability detection, exploitation, and cyber warfare.

Proposed Reporting Requirements

U.S. organizations and U.S. persons would be required to submit quarterly reports and answer questions from BIS if they engage in, or plan to engage in, activities that involve:

  • AI model training using more than 10^26 computational operations; or
  • The acquisition, development, or possession of computing clusters networked at 300 Gbit/s or faster and capable of at least 10^20 computational operations per second for AI training.

BIS further indicated that it intends to seek additional information from reporting entities and persons concerning several topics, including:

  • Any ongoing or planned training, development, or production of dual-use foundation models, including security protections taken to assure the integrity of the training process against sophisticated threats;
  • The ownership and possession of the “model weights” of dual-use foundation models, and the physical and cybersecurity measures designed to protect them;
  • Results of any dual-use foundation model’s red-team performance testing to identify flaws and vulnerabilities; and
  • Any other information pertaining to the safety, reliability, or national security risks relating to dual-use foundation models.

Covered entities and persons would be required to respond to BIS’s questions within 30 calendar days of receiving a request.

Key Takeaways & Possible Future BIS Actions

If implemented, the Proposed Rule would impose detailed reporting requirements concerning highly sensitive and proprietary data of U.S. companies. The U.S. government’s collection and storage of this data raises significant security risks. How will the government protect such data against foreign state actors, especially the People’s Republic of China, in light of the recent revelations about stealthy cyberattacks orchestrated by Salt Typhoon and Volt Typhoon?

Information gathered from these industry surveys may guide BIS in the development of future export controls and restrictions. Indeed, on October 14, 2024, several news outlets reported that U.S. officials are considering capping sales of advanced AI chips from Nvidia and other U.S. companies to certain countries. Since 2022, BIS has steadily increased export restrictions on high-performance AI chips.

The comment period for the Proposed Rule expired on October 11, 2024, and only 50 comments were submitted. Thus, we should expect BIS to issue a final rule in the coming months.

Although BIS expects that the Proposed Rule would only impact a small number of “well-resourced technology companies,” failure to comply carries the risk of civil and criminal penalties. The Proposed Rule and the AI Executive Order demonstrate the U.S. government’s recognition of the significant safety and national security risks posed by advanced AI models. The Proposed Rule aims to ensure the safe and reliable development of AI systems, to counteract dangerous capabilities, and to promote adequate safeguards against the theft or misuse of dual-use foundation models by foreign adversaries.


See BIS Proposed Rule, “Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters,” 89 Fed. Reg. 73612 (Sep. 11, 2024).

See Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Oct. 2023).

See Proposed Rule at 89 Fed. Reg. 73617.

The Proposed Rule defines “model weights” as “the numerical parameters used in the layers of a neural network.”