OpenAI, the creator of the AI chatbot ChatGPT, has teamed up with its primary investor, Microsoft, to thwart five cyberattacks carried out by various malicious actors.
According to a report published on Wednesday, Microsoft has been monitoring hacking groups associated with Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea. These groups have been exploring the use of large language models (LLMs) powered by artificial intelligence in their hacking strategies.
LLMs use extensive text data sets to generate human-like responses. OpenAI revealed that the five cyberattacks originated from two Chinese groups, Charcoal Typhoon and Salmon Typhoon; Crimson Sandstorm, linked to Iran; Emerald Sleet, linked to North Korea; and Forest Blizzard, identified as the Russian group.
These groups attempted to use GPT-4 for various purposes, including researching companies and cybersecurity tools, debugging code, creating scripts, conducting phishing campaigns, translating technical papers, evading malware detection, and studying satellite communication and radar technology. Upon detection, the accounts involved in these activities were deactivated.
OpenAI disclosed this discovery while announcing a ban barring state-backed hacking groups from using its AI products. Although the company successfully disrupted these attacks, it acknowledged the difficulty of completely eradicating every malicious use of its programs.
In response to a surge in AI-generated deepfakes and scams following the launch of ChatGPT, policymakers have increased their scrutiny of developers working on generative AI. In June 2023, OpenAI introduced a $1 million cybersecurity grant program to enhance and evaluate the impact of AI-driven cybersecurity technologies.
Despite OpenAI’s efforts in cybersecurity and the implementation of safeguards to prevent ChatGPT from generating harmful or inappropriate responses, hackers have managed to bypass these measures and manipulate the chatbot to produce such content.
More than 200 entities, including OpenAI, Microsoft, Anthropic, and Google, have recently collaborated with the Biden Administration to establish the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC). These groups were formed in response to President Joe Biden’s executive order on AI safety issued in late October 2023. The order aims to promote the safe development of AI, combat AI-generated deepfakes, and address cybersecurity concerns.