Cloud computing providers to inform government about foreign company use of resources

The Biden administration is set to utilize the Defense Production Act to compel major tech companies, including OpenAI, Google, and Amazon, to notify the US government whenever they employ substantial computing power to train new AI models. This move is aimed at granting the government access to crucial information about potentially sensitive AI projects and safety testing carried out by these companies.

The Commerce Department will be responsible for receiving and reviewing this data, and the requirement could take effect imminently, pending further details. It forms part of a comprehensive executive order issued in October 2023, which seeks to regulate AI development and enforce safety standards. In addition, cloud computing providers will be obligated to inform the government when foreign entities use their resources to train large language models.

Support for this requirement has been voiced by experts who emphasize the necessity for oversight and regulation in the realm of AI development. The National Institute of Standards and Technology (NIST) is currently establishing safety standards for AI models and developing an AI Safety Institute. However, NIST may struggle to meet its deadline of July 26 due to limited funding and expertise.

Biden Administration’s Plan to Inform Government

The Biden administration is taking a proactive approach to ensure transparency and oversight in the development of artificial intelligence (AI) by implementing a plan that utilizes the Defense Production Act. This plan includes the requirement for tech companies, such as OpenAI, Google, and Amazon, to inform the US government whenever they train new AI models using significant computing power.

The primary objective of this requirement is to grant the government access to critical information regarding sensitive AI projects and safety testing conducted by these companies. By being aware of the AI projects being developed, the government can effectively monitor any potential risks or ethical concerns that may arise.
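To make "significant computing power" concrete, here is a rough sketch of how such a reporting trigger might be checked. It assumes the widely reported threshold from the October 2023 executive order (around 10^26 operations for a training run) and the common 6ND rule of thumb for estimating training compute; the model sizes below are illustrative assumptions, not figures disclosed by any company.

```python
# Estimate training compute with the common 6 * N * D approximation
# (N = model parameters, D = training tokens) and compare it against
# an assumed reporting threshold of 1e26 operations, as widely reported
# for the October 2023 executive order. Model sizes are hypothetical.

REPORTING_THRESHOLD_FLOPS = 1e26


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6 * params * tokens


def must_report(params: float, tokens: float) -> bool:
    """True if the estimated training run crosses the assumed threshold."""
    return training_flops(params, tokens) >= REPORTING_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model on 2T tokens: 6 * 7e10 * 2e12 = 8.4e23,
# well below the threshold, so no report would be triggered.
print(must_report(70e9, 2e12))
# A hypothetical 1T-parameter model on 20T tokens: 6 * 1e12 * 2e13 = 1.2e26,
# above the threshold, so a report would be triggered.
print(must_report(1e12, 20e12))
```

This is only an order-of-magnitude sketch: the actual notification rules turn on the government's own definitions and thresholds, not on this approximation.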

To handle the oversight and management of the information provided by these tech companies, the responsibility falls to the Commerce Department. They will be tasked with receiving and reviewing the details disclosed by companies like OpenAI, Google, and Amazon. This collaborative effort between the government and the tech industry aims to strike a balance between fostering innovation and ensuring the responsible development of AI technology.

While the specifics of the implementation date are yet to be announced, there are expectations that the requirement may take effect as soon as next week. The precise details and guidelines will provide further clarity on how tech companies should comply with this mandate. By establishing clear protocols and procedures, the government can effectively gather vital information while respecting the privacy and intellectual property rights of these tech companies.

Broad Executive Order on AI Development

In October 2023, the Biden administration issued a broad executive order that encompasses various aspects of AI development. This executive order seeks to regulate the development of AI technology and ensure the establishment of safety standards. The order acknowledges the transformative potential of AI while emphasizing the importance of ethical considerations and public safety.

The Biden administration recognizes the need for a comprehensive regulatory framework to guide AI development and address any potential risks associated with its deployment. By placing a strong emphasis on safety standards, the government aims to foster public trust in AI technology and promote responsible innovation.

Ensuring the safety of AI is of utmost importance, given its potential impact on society and various industries. The executive order emphasizes the need for the development of safety standards that address the ethical use of AI, data privacy concerns, and potential biases that may emerge from AI algorithms. This approach enables the government to actively participate in shaping AI development, ensuring it aligns with public interest and societal values.

Requirement for Cloud Computing Providers

In addition to the Defense Production Act requirement for tech companies, the Biden administration has extended its focus to cloud computing providers. Under this new mandate, providers must inform the government when foreign companies use their resources to train large language models.

By being aware of foreign companies’ usage of cloud computing resources, the government can gain insight into AI development on a global scale. This information can potentially highlight any emerging trends, technology transfers, or cybersecurity concerns associated with foreign AI projects. The requirement aims to foster a transparent and accountable environment in which the government can actively monitor and address any potential risks arising from these collaborations.

The mandate specifically targets the training of large language models, which play a crucial role in natural language processing and AI-driven applications. By staying informed about the training of these models, the government can better understand the capabilities and potential applications of AI technology.

Experts in the field have expressed support for this requirement, arguing that oversight and regulation are necessary to navigate the complexities and potential implications of AI development. By actively involving cloud computing providers in sharing information, the government can leverage their expertise and establish a collaborative approach to AI governance.

Involvement of the National Institute of Standards and Technology (NIST)

To ensure the establishment of robust safety standards for AI models, the Biden administration has enlisted the expertise of the National Institute of Standards and Technology (NIST). NIST has been tasked with defining these safety standards and will play a crucial role in shaping the ethical considerations and guidelines surrounding AI development.

The involvement of NIST in defining safety standards for AI models demonstrates the commitment of the Biden administration to uphold ethical practices and prevent the misuse of AI technology. By leveraging NIST's expertise, the government aims to establish a comprehensive framework that safeguards against potential risks and biases associated with AI algorithms.

To complement the efforts of defining safety standards, the creation of an AI Safety Institute is also in progress. This institute will serve as a hub for research, development, and dissemination of best practices related to AI safety. By consolidating knowledge and expertise, the AI Safety Institute will contribute to the responsible development and deployment of AI technology.

While NIST has until July 26 to establish these safety standards, it may face challenges in meeting this deadline. Funding and resource limitations, as well as the complexity of the subject matter, are factors that may affect the timeline. Nevertheless, NIST's involvement signifies the government's commitment to addressing the complex ethical questions surrounding AI technology.

In conclusion, the Biden administration’s plan to inform the government about AI development demonstrates a commitment to transparency, oversight, and ethical considerations in the field of artificial intelligence. By utilizing the Defense Production Act, involving tech companies and cloud computing providers, and collaborating with the NIST, the government aims to strike a balance between promoting innovation and ensuring the responsible use of AI technology. Through the establishment of safety standards, the Biden administration has taken significant steps to address the potential risks and challenges associated with AI development, thereby fostering public trust and confidence in this transformative technology.

Related site – Eying China, US proposes ‘know your customer’ cloud computing requirements
