The field of artificial intelligence (AI) has experienced tremendous growth in recent years, and experts predict that by 2030 it could contribute up to $15.7 trillion to the global economy, more than the current combined economic output of China and India. One notable development in this space is the rise of “deepfakes”: highly realistic video and audio recordings, created with AI, that mimic real human appearances and voices so convincingly that they are often indistinguishable from the genuine article.
A recent viral post showed how individuals are using readily available open-source and commercial generative AI tools to alter selfies, producing counterfeit ID images convincing enough to potentially deceive today’s security checks.
The existence of AI-generated deepfakes poses significant challenges to the existing Know Your Customer (KYC) paradigm. Toufi Saliba, CEO of HyperCycle, a company that enables AI microservices to communicate with each other, believes that a major challenge lies in the security processes themselves. He emphasizes that the issue of fake image creation could disrupt centralized systems and highlights the potential role of cryptography in addressing this challenge.
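One way cryptography could play the role Saliba alludes to is through signed provenance: if the capture device signs a hash of the image at creation time, a verifier can later detect any pixel-level tampering. The following is a minimal Python sketch of that idea, not HyperCycle’s actual design; the key, function names, and use of an HMAC (in place of the asymmetric signature a real device would use) are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical device signing key; a real scheme would use an
# asymmetric key pair held in secure hardware, not a shared secret.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes) -> str:
    """Sign the SHA-256 hash of the image at capture time (illustrative)."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Any edit to the pixels changes the hash and breaks the signature."""
    expected = sign_capture(image_bytes)
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw selfie bytes..."
tag = sign_capture(original)
print(verify_capture(original, tag))         # True: untouched capture
print(verify_capture(original + b"x", tag))  # False: tampered image
```

The point of the sketch is that an AI-edited selfie could not carry a valid signature from the original capture, shifting trust from “does this image look real?” to “was this image ever modified?”.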
Dimitry Mihaylov, an AI research expert for the United Nations, suggests that industries need to evolve rapidly to address the challenges posed by deepfakes. He anticipates a shift in regulatory approaches to KYC, with more dynamic and interactive verification processes becoming the norm, such as video KYC.
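The “dynamic and interactive” verification Mihaylov anticipates can be illustrated with a simple challenge–response loop: the verifier issues an unpredictable prompt (a phrase to speak, a head movement to perform) and only accepts a matching response recorded within a short window, something a pre-rendered deepfake cannot anticipate. A hedged Python sketch follows; the challenge pool, timeout, and function names are invented for illustration and not drawn from any specific KYC product.

```python
import secrets
import time

# Illustrative challenge pool; a production system would draw from a far
# larger space so that responses cannot be pre-recorded.
CHALLENGES = ["turn head left", "blink twice", "say 'blue falcon 42'"]
RESPONSE_WINDOW_SECONDS = 10.0

def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable challenge and record when it was issued."""
    return secrets.choice(CHALLENGES), time.monotonic()

def accept_response(issued_at: float, performed_action: str,
                    expected_action: str) -> bool:
    """Accept only a matching action performed inside the time window."""
    in_time = time.monotonic() - issued_at <= RESPONSE_WINDOW_SECONDS
    return in_time and performed_action == expected_action

expected, issued_at = issue_challenge()
print(accept_response(issued_at, expected, expected))      # True: prompt, correct response
print(accept_response(issued_at, "stay still", expected))  # False: wrong action
```

Real-time face-swap tools complicate this picture, which is why the randomness and the tight response window both matter: they force the attacker to generate the forgery live rather than replay one.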
The impact of deepfakes extends to various industries, including the cryptocurrency sector. OnlyFake, a platform that produces counterfeit driver’s licenses and passports for multiple nations, has allegedly bypassed the KYC protocols of several well-known cryptocurrency trading platforms. Leaked discussions reveal that OnlyFake’s services have also been used to skirt verification at numerous other cryptocurrency exchanges and financial institutions.
Generating a counterfeit document on OnlyFake is reportedly quick and easy, with the platform capable of producing up to 100 fake IDs at once. Users can upload their own photo or select one from a curated “personal library of drops.” The fake IDs are rendered as if photographed lying on various household surfaces, mimicking how documents are typically presented during online verification.
In 2022, CertiK, a blockchain security company, uncovered an underground marketplace where people offered their identities for sale. These individuals agreed to serve as the legitimate face for fraudulent cryptocurrency initiatives and to establish banking and exchange accounts for others who would otherwise be barred from certain platforms.
The widespread availability of AI deepfake technology has raised concerns in the cryptocurrency sector, particularly regarding the reliability of video verification processes used for identity validations. Binance chief security officer Jimmy Su has expressed concerns about fraudsters using deepfake technology to bypass exchange KYC procedures. He warns that these video forgeries are becoming so realistic that they can deceive human evaluators.
A study by Sensity AI revealed that the liveness tests used for identity verification are highly vulnerable to deepfake attacks: scammers can substitute their own faces with those of other individuals to deceive verification systems. Examples of deepfake-related scams include a man in India who was tricked out of money by a scammer impersonating his friend, and a deepfake video of Elon Musk promoting misleading crypto investment advice that circulated on Twitter.
As we move into a future driven by AI, the threat of “face swap” deepfake attacks is expected to keep rising. Attacks against remote identity verification systems increased by 704% between 2022 and 2023, largely due to the accessibility of free and low-cost face swap tools. Hackers and scammers are growing more sophisticated, using digital injection attack vectors and emulators to create and deploy deepfakes in real time, a serious challenge for mobile and video authentication systems. As reliance on these technologies deepens, security paradigms will have to evolve in step.