The United Kingdom's AI Safety Institute is expanding its reach internationally with a new office in the United States. The UK Technology Secretary, Michelle Donelan, announced on May 20 that the institute will open its first overseas office in San Francisco this summer. San Francisco was chosen to tap into the deep pool of tech talent in the Bay Area and to engage with the world's largest AI labs, which are headquartered in both London and San Francisco. The move is also intended to strengthen relationships with key players in the US and to advocate for global AI safety in the public interest.
The London branch of the AI Safety Institute already employs a team of 30 experts and is set to expand further, particularly in risk assessment for advanced AI models. Donelan said the expansion showcases the UK's leadership in, and commitment to, AI safety. The development follows the AI Safety Summit held at Bletchley Park in November 2023, the first global summit dedicated to AI safety, which drew leaders from around the world, including the US and China, as well as prominent figures in the AI industry such as Brad Smith, Sam Altman, Demis Hassabis, and Elon Musk.
In addition to the office expansion, the UK announced that it will release a selection of the institute's recent safety-testing results for five publicly available advanced AI models. The models were anonymized, and the results offer a snapshot of their capabilities without labeling any of them "safe" or "unsafe". The findings showed that some models demonstrated proficiency in completing cybersecurity challenges, while others struggled with more advanced tasks. Several models also exhibited PhD-level knowledge of chemistry and biology. However, all tested models were deemed "highly vulnerable" to basic jailbreaks and were unable to complete complex tasks without human supervision.
Ian Hogarth, the chair of the institute, stated that these assessments contribute to an empirical evaluation of model capabilities. With this work, the UK's AI Safety Institute is taking proactive measures against catastrophic AI risks by encouraging the safe development and deployment of AI technology.