In recent years, calls to censor speech in order to protect people from misinformation have grown significantly. However, it is crucial that citizens be allowed to judge the truth for themselves. There is no perfect solution, but better platforms could give users the tools to make that judgment.
Calls for government-enforced censorship have come from various sources globally. The European Union (EU) has implemented social media censorship through its Digital Services Act. Both Brazil and the EU have threatened platforms with penalties for failing to suppress disfavored political voices. In the United States, a Supreme Court ruling left the government free to pressure tech companies to remove misinformation. Mark Zuckerberg, the CEO of Meta, expressed regret for succumbing to pressure from the White House during the pandemic. Minnesota Governor Tim Walz has claimed that there is no guarantee of free speech when it comes to misinformation.
Misinformation online is a genuine problem, but it is not a new phenomenon, and it is unclear whether people are more susceptible to falsehoods than in the past. Twenty years ago, the justification for the Iraq War rested on claims about weapons of mass destruction that were later discredited. During the “Satanic Panic” of the 1980s, an investigation into over 12,000 reports failed to find any evidence of satanic cults abusing children. In the 1950s, McCarthyism stoked fear of communism with baseless claims that the State Department harbored known communists. And literal witch hunts were conducted not so long ago and persist in parts of the world today.
What makes misinformation more dangerous today is the scale at which bad actors, equipped with AI, can deliberately promote falsehoods. Coordinated fake or incentivized accounts create the illusion of consensus and make fringe ideas seem mainstream. Popular social media platforms are closed ecosystems, making it difficult to assess the reputation of a source or the origin of a claim. We are limited to the signals that platforms choose to measure and expose, such as followers, likes, and “verified” status. And as AI advances, hyper-realistic synthetic media undermine our trust in the raw content we typically rely on as evidence: audio, video, images, screenshots, and documents.
Politicians themselves are no more trustworthy than the information they seek to censor. Public trust in government is at historic lows. Many censorship efforts have targeted information that turned out to be true, while government-backed narratives have been repeatedly discredited. The same intelligence apparatus that warned us about election disinformation suppressed the Hunter Biden laptop story by mislabeling it “Russian disinformation.” During the pandemic, legitimate scientific debates about COVID’s origins and public health measures were silenced, while officials promoted claims about masks, transmission, and vaccines that they later had to retract. Elon Musk and Mark Zuckerberg have both revealed the extent of government pressure on social platforms to suppress certain voices and viewpoints, often targeting legitimate speech rather than actual misinformation. Our leaders have proven themselves unfit to be the arbiters of truth.
The underlying problem is a lack of trust. Citizens have lost faith in institutions, traditional media, and politicians. Content platforms such as Google, Facebook, YouTube, and TikTok are constantly accused of political bias. Even if these platforms were completely impartial in moderating content, their lack of transparency would still breed conspiracy theories and claims of bias and shadowbanning.
Fortunately, blockchains offer a solution. They provide open and verifiable systems that do not require trust in centralized authorities. Every account has a transparent history and quantifiable reputation. The source of every piece of content can be traced, and every edit is permanently recorded. No central authority can be pressured to manipulate results or selectively enforce rules. In the lead-up to the US election, Polymarket, a blockchain-based, transparent, and verifiable prediction market, emerged as a reliable source of election forecasts, gaining trust from an electorate that was losing faith in traditional pollsters. Transparency and verifiability create a foundation of truth from which we can rebuild social trust.
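To make “transparent history” concrete, here is a minimal sketch in Python of the hash-chained, append-only record-keeping that blockchains generalize. It is a toy illustration, not any particular chain’s format: each entry commits to everything before it, so a hidden edit anywhere breaks every link that follows.

```python
import hashlib
import json
import time


def record_entry(chain: list, content: str, author: str) -> dict:
    """Append an entry whose hash commits to all prior history."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "author": author,
        "content": content,
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links this entry to the one before it
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry


def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; True only if history is intact."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True


chain: list = []
record_entry(chain, "Original claim", "alice")
record_entry(chain, "Correction with source", "alice")
print(verify_chain(chain))   # True
chain[0]["content"] = "Quietly rewritten claim"
print(verify_chain(chain))   # False: the tampering is detectable by anyone
```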
Blockchain technology enables powerful verification mechanisms. Tools like WorldCoin allow users to prove that they are unique human beings, and similar technology can verify attributes such as residence, citizenship, or professional credentials. Zero-knowledge proofs can verify these attributes without revealing personal data. Such technologies can provide meaningful information about the individuals and groups participating in online discourse, such as whether they are human, where they are located, and what credentials they hold, all while preserving their privacy.
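As a rough intuition for proving one attribute without exposing the rest of an identity, here is a toy sketch using salted hash commitments. This is selective disclosure, a much simpler cousin of real zero-knowledge systems (which use constructions like SNARKs); the issuer, attribute names, and credential format are all hypothetical.

```python
import hashlib
import json
import secrets

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def commit(value: str, salt: str) -> str:
    """A salted hash commitment: hides the value until the salt is revealed."""
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()


# Issuer: signs commitments to the attributes, never the raw values.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

attributes = {"profession": "physician", "name": "Jane Doe", "country": "US"}
salts = {k: secrets.token_hex(16) for k in attributes}
credential = {k: commit(v, salts[k]) for k, v in attributes.items()}
signature = issuer_key.sign(json.dumps(credential, sort_keys=True).encode())

# Holder: discloses ONE attribute; the others stay hidden commitments.
disclosure = {
    "credential": credential,
    "signature": signature,
    "revealed": {"profession": ("physician", salts["profession"])},
}

# Verifier: checks the issuer's signature, then the opened commitment.
issuer_pub.verify(                       # raises InvalidSignature if tampered
    disclosure["signature"],
    json.dumps(disclosure["credential"], sort_keys=True).encode(),
)
value, salt = disclosure["revealed"]["profession"]
assert disclosure["credential"]["profession"] == commit(value, salt)
print(f"Verified: holder is a {value}; name and country never revealed.")
```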
For example, users seeking medical advice could filter their search to verified doctors, and domestic policy debates could be limited to verified citizens. Discussions of wartime disinformation could weigh only verified service members. Politicians could focus on verified constituents rather than be swayed by well-organized fringe groups or foreign actors. AI-powered analysis could surface authentic patterns across verifiable groups, revealing how perspectives differ between experts and the public, citizens and global observers, or any other meaningful segment.
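Once attributes are verifiable, this kind of filtering is a simple query over them. A minimal sketch, assuming hypothetical attribute tags such as “physician” attached to authors by credentials like those above:

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    # Attributes the author has cryptographically proven. The tag names
    # ("verified_human", "physician") are illustrative, not a real schema.
    proofs: set = field(default_factory=set)


def filter_feed(posts: list[Post], required: set) -> list[Post]:
    """Keep only posts whose authors proved every required attribute."""
    return [p for p in posts if required <= p.proofs]


feed = [
    Post("a1", "Take this supplement!", proofs={"verified_human"}),
    Post("a2", "See a doctor first.", proofs={"verified_human", "physician"}),
    Post("bot", "BUY NOW", proofs=set()),
]

# A user seeking medical advice filters to verified physicians only.
for post in filter_feed(feed, {"verified_human", "physician"}):
    print(post.author, post.text)
```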
Cryptographic verification extends beyond blockchain transactions. The Content Authenticity Initiative, a coalition founded by Adobe, The New York Times, and Twitter, is developing protocols that act as digital notaries for cameras and content creation. These protocols cryptographically sign digital content at the moment of capture, embedding secure metadata about the creator, the capturing device, and any modifications made. This combination of cryptographic signatures and provenance metadata enables verifiable authenticity that anyone can inspect. For example, a video could contain cryptographic proof of where and when it was taken on a specific user’s device.
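A rough sketch of signing at capture, with illustrative manifest fields rather than the actual C2PA format: a device key signs a hash of the content together with provenance metadata, and anyone holding the device’s public key can check that neither has changed.

```python
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A camera's embedded signing key (in real hardware, sealed in a secure
# element). The manifest fields below are illustrative, not the C2PA spec.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

image_bytes = b"...raw sensor data..."
manifest = {
    "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "device": "ExampleCam X1",       # hypothetical device name
    "captured_at": time.time(),
    "gps": [40.7128, -74.0060],      # illustrative capture location
    "edits": [],                     # later tools would append signed edits
}
signature = device_key.sign(json.dumps(manifest, sort_keys=True).encode())

# Anyone can later verify the content still matches its signed manifest.
assert manifest["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()
device_pub.verify(signature, json.dumps(manifest, sort_keys=True).encode())
print("Provenance verified: content untouched since capture.")
```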
Furthermore, open protocols allow third parties to build the tools users need to evaluate truth and control their online experience. Protocols like Farcaster already enable users to choose their preferred interfaces and moderation approaches. Third parties can develop reputation systems, fact-checking services, content filters, and analysis tools on top of the same verified data. Instead of being restricted to black-box algorithms and centralized moderation, users have real tools to assess information and real choices in how they use them.
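A sketch of what such composable, client-side moderation could look like. The post fields and filters here are hypothetical, not Farcaster’s actual data model; the point is that the feed is shared data and each user chooses the lenses applied to it.

```python
from typing import Callable

# With an open protocol, moderation is just a function over shared data.
# Any third party can publish a "lens"; any user can compose their own stack.
Filter = Callable[[dict], bool]


def no_unverified_authors(post: dict) -> bool:
    return post.get("author_verified", False)


def strict_fact_check(post: dict) -> bool:
    return post.get("fact_check_score", 0.0) >= 0.8


def apply_lenses(feed: list[dict], lenses: list[Filter]) -> list[dict]:
    """Each user sees the same data filtered through their chosen lenses."""
    return [p for p in feed if all(lens(p) for lens in lenses)]


feed = [
    {"text": "claim A", "author_verified": True, "fact_check_score": 0.9},
    {"text": "claim B", "author_verified": False, "fact_check_score": 0.9},
    {"text": "claim C", "author_verified": True, "fact_check_score": 0.2},
]
print(apply_lenses(feed, [no_unverified_authors, strict_fact_check]))
# Only "claim A" survives this particular user's moderation stack.
```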
Trust is becoming increasingly scarce. As faith in our institutions diminishes, as AI-generated content floods our feeds, and as centralized platforms become more suspect, users will demand verifiability and transparency from their content. New systems will be built on cryptographic proof rather than institutional authority, allowing for the verification of content authenticity, the establishment of participant identity, and the development of a thriving ecosystem of third-party analysis and tooling to support our search for truth. The technology for this trustless future already exists; adoption will follow as it becomes necessary.