The AI Safety Institute Consortium (AISIC) has been established by the United States Department of Commerce, drawing participants from across the tech industry. The consortium, which counts more than 200 members, including major players such as Microsoft, Google, Meta, and Apple, aims to bring together AI creators and users, academics, government and industry researchers, and civil society organizations to promote the development of safe and reliable artificial intelligence. AISIC will be responsible for developing guidelines for red-teaming, capability evaluation, risk management, safety and security, and watermarking synthetic content.

The initiative is part of President Joe Biden's executive order on AI safety, which aims to ensure that the US remains a leader in the responsible development and deployment of AI. The consortium will collaborate with state and local governments, nonprofits, and organizations from other nations to establish industry standards.

The establishment of AISIC follows the creation of the US AI Safety Institute (USAISI) in late October 2023. The White House AI Council, convened by Bruce Reed, the White House deputy chief of staff, has reported on progress in implementing the executive order, finding that the US has met or exceeded the requirements set for the first three months.