The White House has unveiled its first comprehensive policy for managing the risks associated with artificial intelligence (AI). The policy requires government agencies to expand their reporting on AI usage and to address the risks the technology poses.
According to a memorandum issued by the White House on March 28, federal agencies are required to appoint a chief AI officer within 60 days, disclose their use of AI, and implement protective measures.
This directive builds on President Joe Biden’s executive order on AI from October 2023. During a teleconference with reporters, Vice President Kamala Harris said the new policy, issued by the Office of Management and Budget (OMB), aims to guide the entire federal government on the safe and effective use of AI as the technology continues to expand rapidly.
While the government seeks to harness the potential of AI, the Biden administration remains cautious about the evolving risks associated with the technology.
According to the memorandum, certain AI use cases, particularly those within the Department of Defense, will not be required to be disclosed in the inventory. This is because sharing such information would contradict existing laws and government-wide policies.
By December 1, government agencies must establish specific safeguards for AI applications that could potentially impact the rights or safety of Americans. For example, travelers should have the option to opt out of facial recognition technology used by the Transportation Security Administration at airports.
Agencies that are unable to implement these safeguards must cease using the AI system, unless agency leadership can justify how continuing to use it would not heighten risks to safety or rights, or hinder critical agency operations.
The recent AI directives from the OMB are in line with the Biden administration’s blueprint for an “AI Bill of Rights” from October 2022, as well as the AI Risk Management Framework from the National Institute of Standards and Technology in January 2023. These initiatives highlight the importance of developing reliable AI systems.
The OMB is also seeking input on how to enforce compliance and best practices among government contractors supplying AI technology. Later in 2024, it aims to ensure that agencies’ AI contracts align with the policy.
Furthermore, the administration has announced its intention to recruit 100 AI professionals into the government by the summer, as outlined in the “talent surge” of the October executive order.