How to prevent online privacy from burning to the ground
We can’t stop the inevitable dystopia. We may be able to contain it.
The year is almost completely in the rearview, but one 2025 event I can’t stop thinking about is the 2nd-ever DC Privacy Summit in October. (In case you missed the event, here are all the videos, and here’s a Glitch newsletter that summarizes each session.) I tried to unravel all of my thoughts below. But in short: Technology is changing the world quickly, in contradictory and disorienting ways. The DC Privacy Summit was about focus, communication, and clarity of purpose. It was about protecting people from what’s coming. —Mike Orcutt
We probably can’t stop the privacy disaster that’s coming. But we may be able to contain it.
By Mike Orcutt
We’ve hit a fork in the road. One way leads to dystopia: a panopticon, an AI-powered surveillance state that can all but read your thoughts. In the other direction lies a treacherous technical and legal obstacle course that, if successfully navigated, results in reliable, safe, and easy-to-use cryptographic tools that can protect people from state and corporate surveillance.
Whether and to what extent that protective infrastructure gets built will hinge on communication and shared purpose among software developers, law enforcement, and policymakers—groups that don’t always get along.
The clock is ticking. Age verification laws, digitized ID credentials, and stablecoins are coming. As a society, we have a choice to make: Will these new digital facets of our lives be privacy-preserving, or privacy nightmares?
This was the central question we wrestled with in October at the second annual DC Privacy Summit. It’s taken me a little while to go back over all the thought-provoking content and absorb how that question ran through the sessions. Here are a few reflections from the event that I think are worth keeping in mind as we navigate this pivotal technological moment.
Mountains of dry timber, ready to burn
“Things are going to get really, really bad, and they are going to do this in a hurry,” Johns Hopkins computer science professor Matthew Green warned during his keynote talk. Green, who is renowned for his work in cryptography and his insights on data privacy, calls what’s about to happen a privacy forest fire.
A forest fire needs a few ingredients, starting with fuel. “If the fuel is already dry and just sitting there, that’s even better,” Green said. In his analogy, the dry fuel is the vast amount of data that technology companies and governments collect and store in centralized databases.
Next comes the accelerant, which will spread the fire like a strong wind. Green said this has two components. The first is weakening encryption.
In the European Union, some policymakers are pushing a proposal that has become known as “chat control.” Officially called the Regulation to Prevent and Combat Child Sexual Abuse, it originally mandated that messaging application providers, including encrypted messaging providers like Signal, automatically scan content—before it leaves a sender’s device—for evidence of child sexual abuse. Privacy advocates and cryptographers, including Green, warn that this sort of “client-side” scanning system would be a dangerous back door that governments could easily abuse. Denmark, which currently holds the Presidency of the Council of the EU, recently modified the text of the regulation after failing to garner enough support for mandatory scanning. The proposal now calls for “voluntary” scanning instead. It’s on track to be finalized in April.
Another example is in the United Kingdom, where earlier this year the government ordered Apple to create a back door that would provide access to any encrypted user data stored in the cloud, globally. After Apple stopped offering its encrypted cloud service, called Advanced Data Protection, in the UK, and the Trump administration criticized the policy, the government dropped its demand for global access. But in October, it issued a new, more focused demand for access to British users’ encrypted data.
And then there’s the war against private cryptocurrency. The EU is cracking down on privacy coins like Zcash, and the US has prosecuted developers of crypto privacy tools.
The second component of the accelerant in Green’s analogy is more subtle: “In the future, everything we do online is going to be tightly bound to our identity,” he said. This shift is already underway, driven by new laws that mandate that many kinds of websites verify their users are over a certain age. “At some point in the near future, that tightly bound human identity on your phone is going to be used for stuff you do online,” Green warned. “And that human ID is a legible government ID,” meaning your government could theoretically see everything you’ve done online.
The final forest fire ingredient is powerful AI. “In the past, we have survived all of this data collection because we’ve had limited human capacity to process data,” Green said. “That’s not a problem anymore.”
The forest for the trees
You may prefer a different disaster metaphor. But the notion that we need new ways to protect ourselves from massive, AI-assisted surveillance should not be controversial. Unfortunately, this conversation is stuck at an impasse. It’s plausible that we could deploy some of the astounding practical cryptographic capabilities that have emerged in the last half-decade toward that protection. But these technologies acquired a bad reputation with many policy and law enforcement types right out of the gate.
Take zero-knowledge cryptography, which makes it possible to prove statements about yourself, like your national citizenship or that you are over 18, without revealing other information. Unfortunately, the most attention it has received outside of a relatively small community of cryptographers and enthusiasts was thanks to a massive cryptocurrency heist carried out by a band of North Korean state-sponsored hackers raising funds for the nation’s nuclear weapons program.
The Lazarus Group, as it’s called, stole $600 million in 2022, then turned to Tornado Cash, a zero-knowledge cryptography-based privacy tool on the Ethereum blockchain, to throw law enforcement off its tracks. That has led to the criminal conviction of two of Tornado Cash’s developers, Alexey Pertsev and Roman Storm.
The episode, combined with the technical complexity at play, has made it easy for regular people to miss that the underlying technology could be used as protection against the overcollection of personal data. If you need evidence, look no further than Google Wallet. Google has incorporated zero-knowledge proofs to create anonymous credentials, which can be used to prove the veracity of information on a driver’s license or passport, like someone’s age, gender, or nationality, without revealing any other personal data.
Abhi Shelat, a Northeastern University computer science professor who helped develop Google’s zero-knowledge credentials, argued at the DC Privacy Summit that zero-knowledge cryptography is ready for mainstream adoption. “It works on a phone; it works on a blockchain,” he said. “I wouldn’t say there’s a real technical bottleneck to deploying this.”
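To make the idea less abstract, here is a minimal sketch of the classic textbook example of a zero-knowledge proof: a Schnorr proof of knowledge, made non-interactive with the Fiat–Shamir transform. This is my own toy illustration, not the scheme inside Google Wallet or any production credential system, and the parameters below are deliberately tiny and insecure. The point is the shape of the protocol: the prover convinces the verifier it knows a secret without ever transmitting it.

```python
import hashlib
import secrets

# Toy Schnorr zero-knowledge proof of knowledge (Fiat-Shamir variant).
# The prover shows it knows the secret x behind a public y = g^x mod p
# without revealing x. Toy parameters only: real deployments use
# ~256-bit elliptic-curve groups, not a 5-bit prime field.
p, q, g = 23, 11, 4  # p = 2q + 1; g generates the order-q subgroup

def keygen():
    x = secrets.randbelow(q)          # secret
    return x, pow(g, x, p)            # (secret x, public y)

def challenge(y, t):
    # Fiat-Shamir: a hash of the transcript stands in for the verifier
    data = f"{g}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, y):
    r = secrets.randbelow(q)          # fresh one-time nonce
    t = pow(g, r, p)                  # commitment
    c = challenge(y, t)               # challenge derived from transcript
    s = (r + c * x) % q               # response; x stays hidden inside s
    return t, s

def verify(y, t, s):
    c = challenge(y, t)
    # Accept iff g^s == t * y^c (mod p), which holds exactly when
    # the prover knew x with y = g^x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, s = prove(x, y)
print(verify(y, t, s))  # True: the proof checks out, yet x was never sent
```

The same commit-challenge-respond pattern, scaled up and composed, is what lets a phone prove “over 18” from a signed credential without disclosing a birthdate.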

Few understand this
Instead, Shelat said, the major bottleneck to deployment may be how few people understand the technology well enough to reckon with the novel questions it raises.
For example, law enforcement agencies are accustomed to collecting and storing detailed data about financial transactions. If advanced cryptographic tools are used to keep that information secret, how will those agencies do their job? On the other hand, could such tools be designed to make it easier to prevent financial crime?
More specifically, could the crypto industry find better ways to stop the Lazarus Group from using cryptocurrency to help fund its nuclear weapons program? “Onchain privacy is like nuclear physics,” Wei Dai, a cryptographer and research partner at the venture capital firm 1kx, argued at the Privacy Summit. “It is a dual-use technology that can do great good for the world, but also can be very dangerous.”
Neha Narula, director of MIT’s Digital Currency Initiative, said that while zero-knowledge proofs are “uniquely powerful,” it’s important not to gloss over the potential downsides if they and other powerful privacy tools gain wide adoption.
“Imagine a world where producing cryptographic proofs is easy and automatic, and so asking for proofs becomes routine,” she said. “You need to prove you are credit-worthy to use a ridesharing app, or you need to prove your health status in order to enter a building.” People end up with less autonomy, not more. “A technology that was originally designed to help with privacy turns into continuous permissioning,” she imagined.
A chance to show the world
In the near term, given the national security concerns, the lack of clarity about how these tools fit into the law, and the technology’s rapid evolution, the legal conflict over them seems likely to keep festering.
On the other hand, recent policy developments—age verification laws, government-backed digital IDs, and America’s new stablecoin law, to be specific—may force the issue in a way that makes a resolution more plausible.
The industry should jump at the chance to demonstrate its powerful cryptographic technologies, Peter Van Valkenburgh, executive director of the blockchain policy advocacy and research group Coin Center, argued at the Privacy Summit.
In the US, the Guiding and Establishing National Innovation for US Stablecoins (GENIUS) Act has cleared the way for more traditional financial institutions to start using stablecoins. That may be a business opportunity for crypto companies, but it also raises urgent questions about privacy. “The current stablecoin model of recording all user payment transactions on a public blockchain is actually worse for personal privacy than the traditional banking system,” he said. It’s in this context that Coin Center is advocating for “a fundamental rethink of digital identity in the United States.”
America’s current anti-money laundering (AML) regime operates under a law called the Bank Secrecy Act, which was passed in 1970. The antiquated system is costly, blocks only a small percentage of criminal funds, and gives the government the power to “weaponize” our payments data as a means of control, Van Valkenburgh said at the Privacy Summit. Blockchains, along with portable digital ID credentials, zero-knowledge proofs, and cryptographic tools, can be used to build alternative, more effective, and much less invasive AML systems, he said.
There’s a lot that still needs to be figured out, though. For one thing, proving individual, static statements about yourself is a long way from replacing traditional AML systems. “None of the history of computer attack and defense or anti-money laundering is a static system,” Ian Miers, a co-inventor of Zcash and a computer science professor at the University of Maryland, said later in the day. “You have to be able to react and adapt because any given tactic you pick, they’re going to adapt their tactics, techniques, and procedures to get around it.” Miers argued that what’s needed are systems capable of automatically determining “dynamic risk scores” for potential users while still maintaining user privacy.
Miers and Van Valkenburgh recently authored a paper calling on crypto privacy projects to coordinate amongst themselves and with civil liberties advocates to design and test safe, alternative AML systems. To start, an industry consortium “could develop open standards for decentralized, maximally privacy-preserving identity credential architecture and urge Congress and regulators to authorize regulated providers to rely on them,” they wrote.
A race against time
Something like that would have implications beyond crypto and anti-money laundering, particularly if governments keep pursuing “identity-binding mandates,” as Green calls them.
We should resist policies that tie government identities to online activity, Green said. “At the very least, we have to make sure that any identity-binding mandate we add to everything is done in a privacy-preserving way.” That will tee up another fight, he said: “Is it fully privacy-preserving or is it privacy-preserving with a warrant exception? We haven’t even begun that discussion, which is terrifying, because these laws are being passed right now.”
We should also ban companies from retaining the data they used to verify identity credentials, Green argued. Otherwise, a market will arise for it. “It’s not privacy-preserving, they will log it, and they will log your identity with every single thing you do, and that data’s going to be incredibly valuable.”
Finally, while anonymous credentials work, they create new challenges when it comes to the sort of fraud and abuse that commonly occurs online. “I would love to tell you that all we have to do is replace cookies with zero-knowledge proofs and we’re done,” Green said. “But what happens when someone takes my key or my driver’s license credentials off my phone and copies them onto 10,000 phones that are all bots? Which is what they’re gonna do.”
Today, if Google sees the same cookie coming in from too many IP addresses, it can tell you’ve been hacked. “If I’m anonymous—if I’m a zero-knowledge proof—they can’t do that,” he said. He called this a “huge technical barrier” to the real-world use of anonymous credentials. “I’d love to tell you that the academic world has fixed this. Nobody’s fixed this.”
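A toy sketch makes the detection gap concrete. This is my own illustration of the general idea, not any real abuse-detection pipeline: a server that can link requests to a stable identifier (a cookie) can count distinct IP addresses per identifier and flag anomalies, but when every request carries a fresh, unlinkable proof, the same counter sees nothing unusual even if one credential has been cloned onto thousands of bots.

```python
import secrets
from collections import defaultdict

def flag_suspicious(requests, max_ips=3):
    """requests: list of (identifier, ip) pairs. Returns identifiers
    seen from more than max_ips distinct IP addresses."""
    seen = defaultdict(set)
    for ident, ip in requests:
        seen[ident].add(ip)
    return {i for i, ips in seen.items() if len(ips) > max_ips}

# Cookie world: the stolen cookie "abc" shows up from 50 IPs -> flagged.
cookie_reqs = [("abc", f"10.0.0.{i}") for i in range(50)]
print(flag_suspicious(cookie_reqs))   # {'abc'}

# Anonymous-proof world: each bot presents a fresh, unlinkable token
# (modeled here as a random string), so no identifier ever repeats and
# nothing crosses the threshold -- the clones are invisible.
anon_reqs = [(secrets.token_hex(8), f"10.0.0.{i}") for i in range(50)]
print(flag_suspicious(anon_reqs))     # set()
```

Unlinkability is the whole point of an anonymous credential, and it is exactly what this kind of frequency analysis depends on, which is why Green calls the problem unsolved rather than merely unimplemented.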