Ever since the launch of ChatGPT in late 2022, artificial intelligence (AI) has become a mainstream phenomenon, with both tech and non-tech companies racing to develop their own AI assistants. These AI assistants have taken on various roles in our lives, serving as business consultants, marriage counselors, advisors, therapists, and confidants. We trust them with our personal and private information, believing that our data is protected.
However, recent research conducted by scholars at the University of Ber-Gurion has raised concerns about the security of AI assistants. They discovered a vulnerability in the design of major platforms such as Microsoft’s Copilot and OpenAI’s ChatGPT-4, which allows attackers to decipher encrypted AI assistant responses with surprising accuracy. This means that our secrets and sensitive information may not be as protected as we think.
Furthermore, the researchers found that once an attacker has developed a tool to decipher conversations with one AI assistant, they can easily apply the same tool to other services without much effort. This raises the possibility of widespread hacking and exposure of personal and professional information.
This is not the first time security flaws in AI assistants have been discovered. In 2023, researchers from U.S. universities and Google DeepMind found that ChatGPT could be prompted to repeat certain words and spew out memorized portions of its training data. This includes sensitive information such as Bitcoin addresses, programming codes, and more.
The problem is even more severe with open-source models. A recent study demonstrated how attackers could compromise Hugging Face’s conversion service and gain unauthorized access to submitted models. This could lead to the implantation of malicious models or unauthorized access to private datasets.
The implications of these security vulnerabilities are significant. Organizations like Microsoft and Google, which have numerous models hosted on Hugging Face, were found to be at risk of attacks and compromise. The more power we give to AI assistants, the more vulnerable we become to such attacks.
Bill Gates, in a blog post, described the potential of an overarching AI assistant that has access to all our devices and information. While this might be exciting, it also means that if such an AI assistant is attacked, our entire lives could be hijacked, including the information of those connected to us.
To protect ourselves, some measures have been taken. The U.S. House of Representatives has banned the use of Microsoft’s Copilot by congressional staffers due to the threat of data leakage. Technology companies and financial institutions have also banned the use of AI bots by their employees. However, these actions are not enough.
Major technology companies like OpenAI and Microsoft have pledged to adhere to responsible AI, but more substantial actions are needed. Regulators and policy makers should demand stronger security measures, and in the meantime, we should be cautious about sharing sensitive information.
Perhaps if we collectively stop using these AI assistants until stronger security measures are implemented, we can force companies and developers to take the necessary actions to protect our privacy and data.
Dr. Merav Ozair, a guest author for Cointelegraph, emphasizes the importance of demanding action from regulators and policy makers. She advises refraining from sharing sensitive information until proper security measures are in place.
Disclaimer: This article is for general information purposes only and should not be taken as legal or investment advice. The views expressed are solely those of the author and do not necessarily reflect the opinions of Cointelegraph.