Six Glitchy Things
AI gets more political. ZK for AML. Nihilistic prediction market shenanigans.
This is the second edition of our news digest, which we will send regularly alongside our other work. Here’s the first edition, in case you missed it.
The speed of technological evolution is causing familiar systems—from government to finance to journalism—to glitch. The resulting noise makes it tough to connect the dots. We hope this digest helps.
The EU watered down “chat control.” Privacy advocates still hate it. European governments have agreed to advance the controversial Regulation to Prevent and Combat Child Sexual Abuse, known to its critics as “chat control.” The proposal originally mandated that messaging application providers—including encrypted messaging providers like Signal—automatically scan content for evidence of child sexual abuse before it leaves a sender’s device. Privacy advocates and cryptographers warn that this sort of “client-side” scanning would be a dangerous backdoor that governments could easily abuse. Denmark, which currently holds the Presidency of the Council of the EU, recently modified the text after failing to garner enough support for mandatory scanning. Now the proposal calls for “voluntary” scanning instead—and is on track to be finalized in April.
Some privacy advocates called the new language a Trojan Horse. Patrick Breyer, a digital rights activist and former member of the European Parliament representing the German Pirate Party, told TechRadar the policy is a “disaster waiting to happen.” Said Breyer: “By cementing ‘voluntary’ mass scanning, they are legitimizing the warrantless, error-prone mass surveillance of millions of Europeans by US corporations.”
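For context on the mechanism at issue, here is a minimal Python sketch of the client-side scanning idea: check content against a blocklist of known-image hashes on the sender’s device, before end-to-end encryption is applied. It is purely illustrative; real proposals rely on perceptual hashes that match visually similar images rather than exact bytes, and every name and value here is hypothetical.

```python
import hashlib

# Hypothetical blocklist of hashes of known illegal images, shipped
# to the client by the provider or a clearinghouse.
BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be encrypted and sent.

    This runs on the sender's device, before encryption, which is why
    critics call it a backdoor: the matching logic and the blocklist
    sit outside the encryption's guarantees and could flag any content
    added to the list later.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKLIST

assert scan_before_send(b"holiday photo")    # not on the list, sends fine
assert not scan_before_send(b"test")         # sha256(b"test") is the blocklisted value
```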
AI companies are “taking a cue” from the crypto industry on election spending. That’s according to the New York Times, which recently published an interesting report on a plan to raise $50 million for a “new network of super PACs that would back midterm candidates in both parties who prioritize AI regulations.” The effort, which apparently involves employees at Anthropic and “allied donors loosely tied to the effective altruism movement,” is explicitly a counter to Leading the Future, a group that has raised $100 million combined from Andreessen Horowitz and the family of OpenAI co-founder Greg Brockman.
AI industry money is almost guaranteed to pour into politics this cycle, just like crypto industry money did in 2024. But as the Times notes, unlike the crypto industry, the AI industry “is not rowing in one direction politically.” Whereas the Leading the Future side has been largely critical of AI safety regulation, Anthropic’s leaders are known for their public warnings about the possible dangers of AI. For example, Anthropic recently went to the Wall Street Journal with the information that Chinese hackers used its technology to “automate break-ins of major corporations and foreign governments during a September hacking campaign.”
Aztec’s zero-knowledge sanctions check shows what ZK can do for AML. Token sales require sanctions checks. Aztec’s recent token sale gave buyers the option to use ZKPassport, a service that uses zero-knowledge proofs to let people anonymously prove they are over 18, not on a sanctions list, and not from any of the countries barred from participating in the sale for regulatory reasons. The application demonstrates how zero-knowledge proofs could be used to achieve anti-money laundering goals without collecting personal information from users. We talked about this all day long at the DC Privacy Summit in October.
There’s an important caveat, as Ariel Gabizon, chief scientist at Aztec, clarified on Twitter: The process records a hash of the user’s passport chip data on the blockchain, a “unique identifier needed to ensure one person doesn’t participate in the sale multiple times and purchase tokens beyond the regulatory allowed limit,” he said. That means anyone who also has a user’s passport chip data, starting with the government that issued the credential, can check whether that user participated in the token sale. As ZKPassport explains here, the team is working on a new approach that would prevent this.
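To make that caveat concrete, here is a minimal Python sketch of the hash-as-unique-identifier pattern Gabizon describes. All names are hypothetical, and a real deployment publishes the hash alongside a zero-knowledge proof of the age, sanctions, and residency claims rather than any raw data.

```python
import hashlib

def nullifier(passport_chip_data: bytes, sale_id: bytes) -> str:
    """Deterministic one-way identifier derived from the passport chip.

    Published on-chain so the sale can reject repeat buyers. The raw
    chip data never leaves the device; only this hash does.
    """
    return hashlib.sha256(passport_chip_data + sale_id).hexdigest()

seen: set[str] = set()  # stand-in for the on-chain set of hashes

def try_participate(chip_data: bytes, sale_id: bytes) -> bool:
    n = nullifier(chip_data, sale_id)
    if n in seen:
        return False  # same passport already participated
    seen.add(n)
    return True

sale = b"example-token-sale"
alice_chip = b"alice passport chip bytes"  # read via NFC in practice

assert try_participate(alice_chip, sale)      # first purchase goes through
assert not try_participate(alice_chip, sale)  # second attempt is blocked

# The caveat: anyone who also holds alice_chip (starting with the
# issuing government) can recompute the hash and test it on-chain.
assert nullifier(alice_chip, sale) in seen
```

The upside and the downside live in the same data structure: the determinism that enforces per-person purchase limits is exactly what makes participation checkable by anyone who holds the underlying credential data.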
Betting on the war in Ukraine got ugly. The perverse beauty of the term “degen” is that there’s no bottom to it. Anyway, people are using the prediction market Polymarket to wager on battles in Ukraine. In one market, gamblers had been betting on whether Russia would capture the city of Myrnohrad by the middle of November. According to a report from 404 Media, just before the clock ran out on that bet, the market’s agreed-upon source, a map maintained by the Institute for the Study of War, a Washington, DC-based think tank, was edited to show that Russia had captured the city. The market paid out to the supposed winners. But then the map was edited once more, and Russia no longer controlled the city, according to 404. “It has come to ISW’s attention that an unauthorized and unapproved edit to the interactive map of Russia’s invasion of Ukraine was made on the night of November 15-16, EST,” ISW said in a statement on its website. “The unauthorized edit was removed before the day’s normal workflow began on November 16,” it added.
When 404 asked ISW about the Polymarket betting, the group didn’t hold back: “ISW has become aware that some organizations and individuals are promoting betting on the course of the war in Ukraine and that ISW’s maps are being used to adjudicate that betting. ISW strongly disapproves of such activities and strenuously objects to the use of our maps for such purposes, for which we emphatically do not give consent.”
Crypto exchanges Binance and OKX appear to have helped transnational criminal hacking groups move millions in illicit funds. One of the main characters in this story is the Huione Group, a Cambodian financial conglomerate that also runs a shady digital marketplace likened to “Amazon for criminals.” In May, the US moved to ban the group from its financial system. “Huione Group serves as a critical node for laundering proceeds of cyber heists carried out by the Democratic People’s Republic of Korea, and for transnational criminal organizations in Southeast Asia perpetrating convertible virtual currency investment scams, commonly known as ‘pig butchering’ scams, as well as other types of CVC-related scams,” the US Treasury Department’s Financial Crimes Enforcement Network said at the time. Nonetheless, hundreds of millions of dollars flowed from Huione to crypto exchanges Binance and OKX during the first half of this year—including after the US had branded it a criminal organization, according to a new investigation from the International Consortium of Investigative Journalists.
Binance also seems to have allowed North Korean hackers to use its service in the aftermath of the $1.5 billion hack of another crypto exchange, Bybit, in February. A crypto analytics firm called ChainArgos told the New York Times that a handful of Binance accounts received $900 million in Ether from the same service the hackers were using to swap stolen Ether for Bitcoin. The stolen money was “the only conceivable source for these outflows,” ChainArgos CEO Jonathan Reiter told the Times. “Even a bad—maybe even defective—screening tool would spot that.”
AI will change voters’ minds. It doesn’t have to be a disaster for democracy. Research published last week in Nature and Science outlined how AI chatbots programmed to persuade people in the US, Canada, Poland, and the UK to vote for specific political candidates were far more effective than standard political advertising. On its own, this shouldn’t be too surprising; an “I’m so-and-so and I approve this message” ad stumping for a candidate is a lot easier to dismiss than a chatbot that answers questions with a coherent-sounding argument. One interesting wrinkle, though: the research showed that the bots became more persuasive when they served up more misinformation.
A lot of coverage around the research assessed whether it proved that chatbots were destined to hasten the decline of democracy by way of rotting voters’ brains. The answer to that is: yeah, probably. But it’s not a foregone conclusion. There are plenty of timelines still available to us in which governments step in and pass laws that restrict the use of AI during elections. To some extent it’s already happening—as MIT Technology Review reports, the EU’s AI Act classifies AI tools of political persuasion as “high risk” and curtails their use. Unfortunately, in the US, where the federal government has been hostile to AI regulation, election watchdogs are forced to operate using outdated laws about what constitutes election “fraud.” There’s still time to get a handle on the situation, though that will require meaningful regulatory action.
Follow us on Twitter and Bluesky—or get corporate with us on LinkedIn.

