Sorry, but we have to talk about Moltbook
Also: Is Ethereum too institutional for a "cypherpunk pivot"?
Greetings! We’re back with another list of Glitchy things. Here’s a link to the previous digest in case you missed it. And check out our new events page!
The speed of technological evolution is causing familiar systems—from government to finance to journalism—to glitch. The resulting noise makes it tough to connect the dots. We hope this digest helps.
Practical AI agents are arriving, bringing new security and privacy risks with them.
A new open source agent called OpenClaw (previously “Clawdbot” and, for a brief while, “Moltbot”) turned heads among AI enthusiasts last month for its ability to perform a range of personal assistant-type tasks. It also appears to be a concrete example of the novel threats that AI agents pose to people’s sensitive information.
In a multipart post on X, hacker and cybersecurity entrepreneur Jamieson O’Reilly described how he analyzed OpenClaw and found that it left the door wide open for adversaries to connect to it and extract a user’s sensitive information. (The OpenClaw team has since released multiple security updates and says “Security remains our top priority.”)
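The class of bug described here is easy to picture: a control endpoint that answers anyone who can reach it. As a rough illustration (every URL, port, and path below is a placeholder assumption, not OpenClaw’s actual interface), here’s how a user might check whether a locally running agent answers without any credentials:

```python
# Hypothetical self-check: does a local agent answer HTTP requests with
# no credentials at all? The host, port, and path are placeholders, not
# OpenClaw's real interface.
import urllib.request
import urllib.error

AGENT_URL = "http://127.0.0.1:8080/status"  # placeholder endpoint

try:
    with urllib.request.urlopen(AGENT_URL, timeout=3) as resp:
        # Getting data back with no token means any local process (or,
        # if the agent binds to 0.0.0.0, any machine on the network)
        # can get it too.
        print("unauthenticated response:", resp.read(200))
except urllib.error.URLError:
    print("no unauthenticated endpoint answered (a good sign)")
```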
This is something privacy experts have been banging on about for a while now. Meredith Whittaker, President of the Signal Foundation, has taken to describing agents as “breaking the blood-brain barrier” of personal computing because of the level of access they require in order to be effective. She and others have pointed to Microsoft Recall, an agentic system built into Windows on Copilot+ PCs. Researchers at Signal discovered last year that Recall learns a user’s behavior by taking screenshots of everything someone does on their computer (which included looking over users’ shoulders and capturing their unencrypted Signal messages).
As Whittaker, O’Reilly, and others point out, therein lies the central tension of our agentic future. An agent’s ability to do stuff depends heavily on two things: the data it has access to and the permission it has to take actions on our behalf. OpenClaw, for example, has pervasive access to apps and data throughout the computer it’s running on. This presents such a large risk to sensitive data that AI experts routinely warn people not to install OpenClaw on their own machines. (Peter Steinberger, OpenClaw’s inventor, has been vocal on social media about its security risks as well as its lack of polish as a product.)
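One way to blunt that tension is to make an agent’s access explicit and narrow. Here’s a minimal sketch of an allowlist wrapper an agent runtime could use to gate file reads; the names and layout are our own illustration, not OpenClaw’s actual design:

```python
# Illustrative sketch: gate an agent's file access behind an explicit
# allowlist instead of giving it the run of the whole machine. None of
# these names come from OpenClaw.
from pathlib import Path

ALLOWED_DIRS = [Path("~/agent-workspace").expanduser().resolve()]

def read_for_agent(requested: str) -> str:
    """Read a file on the agent's behalf, but only inside allowed dirs."""
    target = Path(requested).expanduser().resolve()
    if not any(target.is_relative_to(d) for d in ALLOWED_DIRS):
        raise PermissionError(f"agent may not read {target}")
    return target.read_text()
```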
And oh yeah, Moltbook.
As OpenClaw fever was in full force around the AI world, Moltbook, a social media platform for AI agents, was born last week. Stratospheric hype quickly followed—the influential AI researcher Andrej Karpathy declared agent behavior on the forum to be “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People’s Clawdbots … are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.”
In just a few days, some 1.5 million agents had registered on the site. Headlines blared about agents kibitzing in Moltbook forums, scheming to create their own language, their own religion.
The agents may yet organize into a society that ends human dominance on Earth. For the moment, though, the initial wave of hype has given way to a more sober recognition that when hastily built AIs are set loose on people’s computers, and on the internet, things will get messy. Karpathy has also changed his tune. In addition to calling Moltbook “a dumpster fire” in its current state, he said it’s likely that large groups of agents interacting at scale will lead to unintended consequences. “We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc,” he wrote. “It’s very hard to tell, the experiment is running live.”
How to fight malicious AI swarms spreading lies.
Artificial hive minds that can map online social structures, infiltrate target communities with tailored behavior that appears human, and “self-optimize” in real time are coming, and they pose a grave threat to democracy. That’s according to the 22 academics, researchers, and policy-minded experts behind a recent essay in the journal Science that grimly warns of “a new frontier in information warfare.”
Bots designed to sow disinformation have been around for years. But the capabilities of LLM-powered agent swarms represent a quantum leap, the essay contends. And today’s fragmented information environment, full of disparate ideological echo chambers, makes many online communities ripe for this sort of manipulation. “AI swarms are distinctly equipped to exploit this by engineering a synthetic consensus,” the authors argue. These info weapons will be capable of driving “norm shifts” and even deeper cultural changes, like “subtly altering a community’s language, symbols and identity,” the authors predict.
There are some concrete actions we can take to prevent this from happening, the essay argues. “Platforms and regulators could require continuous, real-time monitoring detectors that scan live traffic for statistically anomalous coordination patterns,” and protect their users with “AI shields” that could “label posts that carry high swarm-likelihood scores, let users down-rank or hide them, and surface short provenance explanations in situ.”
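As a toy illustration of that last idea, here’s roughly what the label-and-down-rank step of an “AI shield” might look like. The swarm score itself is a placeholder, since the essay doesn’t prescribe a particular detector:

```python
# Toy sketch of the "AI shield" idea: label posts that carry a high
# swarm-likelihood score and sink them in the feed. swarm_score is a
# placeholder for whatever detector a platform actually runs.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    swarm_score: float = 0.0  # 0 = organic, 1 = near-certain coordination

SWARM_THRESHOLD = 0.8  # assumed cutoff, purely illustrative

def shield(posts: list[Post]) -> list[tuple[Post, str]]:
    """Label high-score posts, then sort the feed so they sink."""
    labeled = [
        (p, "high swarm likelihood" if p.swarm_score >= SWARM_THRESHOLD else "")
        for p in posts
    ]
    # Down-rank rather than delete: flagged posts drop to the bottom of
    # the feed but stay visible, leaving the choice to the user.
    return sorted(labeled, key=lambda pair: pair[0].swarm_score)
```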
“Provenance” is an important word in that sentence, and throughout the paper. Let’s talk about it for a second. Merriam-Webster’s first definition: ORIGIN, SOURCE. “The history of ownership of a valued object or work of art or literature” is the second definition. Where did a fact, story, article, or social media post originate? Who or what was the source? Who or what has taken ownership of them since then? Tracking and verifying the ownership of valuable things is exactly what crypto does. Hence, the Glitchiest paragraph in the essay:
The adaptive nature of AI swarms underscores the need for a complementary approach: strengthening provenance. Stronger provenance may reinforce the reliability of identity signals without muting speech. Policy-makers may incentivize the rapid adoption of passkeys, cryptographic attestations, and federated reputation standards, backed by antispoofing research and development. However, “proof-of-human” is no panacea: Millions of people online lack identification, biometrics raise privacy risks, and verified accounts can be hijacked. Real-identity policies may deter bots yet endanger political dissidents, activists, and whistleblowers who rely on anonymity to speak safely. Nevertheless, provenance strengthening is among the most promising ways to raise the cost of mass manipulation. Safeguards could allow verified-yet-anonymous posting, periodic reverification to curb hijacking, and symbolic subscription fees to deter botnets. Cryptographic tools can further protect privacy while preserving accountability.
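To make “verified-yet-anonymous posting” a little more concrete: a persistent pseudonymous key pair can do the verifying. Below is a minimal sketch using Ed25519 signatures (via the third-party cryptography package); it illustrates the general idea, not the essay’s specific proposal. Readers can confirm that posts come from the same verified keyholder without ever learning who that keyholder is.

```python
# Minimal sketch of verified-yet-anonymous posting: a pseudonymous
# Ed25519 key signs each post. Anyone can verify the posts came from
# the same keyholder without learning the keyholder's identity.
# Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # stays on the user's device
verify_key = signing_key.public_key()       # published as the pseudonym

post = b"provenance > vibes"
signature = signing_key.sign(post)

try:
    verify_key.verify(signature, post)      # raises if forged or tampered
    print("post verifiably from this pseudonym")
except InvalidSignature:
    print("signature check failed")
```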
Speaking of which, the aftermath of recent events in Minnesota shows how technological advances are distorting reality and eroding trust.
Don’t worry, though, the New York Times is on it. In all seriousness, things have gotten incredibly bleak, incredibly quickly. A Times analysis of the online reaction to the recent killing of Alex Pretti by federal agents in Minneapolis suggests that we may have passed a point of no return. The zone is so flooded with fake images and other content—including from the White House—that it has become difficult for many people, including members of Congress and federal government officials, to determine what’s real. “In moments past, we thought that this online fever would break, and now it is a systemic feature rather than a bug,” Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, told the Times.
Proof-of-personhood may be required to access OpenAI’s prospective social network.
Sources familiar with the project told Forbes the small team behind it “has considered requiring users to provide ‘proof of personhood’ via Apple’s FaceID or the World Orb.” The latter would make sense, given that OpenAI CEO Sam Altman founded and chairs Tools for Humanity, the company behind the Orb. Tools for Humanity’s technology, which takes advantage of zero-knowledge cryptography, generates a unique digital ID corresponding to every pair of human irises that its camera scans. Then again, there’s a chance none of this happens. “There is currently no launch timeline for OpenAI’s social network and it could change dramatically before it’s ready to show publicly,” the sources told Forbes. The Verge first reported on the social network project in April 2025.
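In cartoon form, the one-ID-per-iris-pair guarantee reduces to deduplicating a derived identifier. The sketch below is our simplification; the real system uses zero-knowledge proofs precisely so that raw biometric data never has to be stored or compared like this:

```python
# Cartoon version of proof-of-personhood uniqueness: derive a stable
# identifier from an iris template and reject duplicates. The actual
# World ID protocol uses zero-knowledge proofs so raw biometrics never
# leave the device; this shows only the one-person-one-ID bookkeeping.
import hashlib

registered_ids: set[str] = set()

def register(iris_template: bytes) -> str:
    """Issue one digital ID per iris pair; reject repeat registrations."""
    digital_id = hashlib.sha256(iris_template).hexdigest()
    if digital_id in registered_ids:
        raise ValueError("this person already holds an ID")
    registered_ids.add(digital_id)
    return digital_id
```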
Is Ethereum too institutional for a “cypherpunk pivot”?
Vitalik Buterin has had enough of Ethereum’s “backsliding,” and he’s got a New Year’s resolution: “2026 is the year we take back lost ground in terms of self-sovereignty and trustlessness,” he declared in one of his many recent long tweets. Over the past decade, Ethereum has compromised its values in several areas “in the name of mainstream adoption,” he said. In his view, it has become too reliant on centralized infrastructure to maintain the chain and run decentralized applications. “We are making that compromise no longer.” What’s different now, he argued, is that tools are available that make it easier for people to participate in and use the network while maintaining the security and privacy of their personal data.
dasha, an anon with a Milady profile pic and more than 32,000 followers, is skeptical. “ethereum is too institutional by now for any sort of real cypherpunk pivot,” they tweeted. Stablecoins and DeFi applications on the network will not become “less compliant,” dasha said. “the kyc tightening will continue.” Why are we quoting dasha? Because dasha’s words apparently hit home for Buterin. He hit back with a quote tweet (another long one) and a counterargument he often deploys: It’s not so simple! “The relationship between ‘institutions’ and ‘cypherpunk’ is complex and needs to be understood properly,” he said.
“In truth, institutions (both governments and corporations) are neither guaranteed friend nor foe.” For example, he said, the US has the Patriot Act, but its government is “now famously a user of Signal.” Buterin then predicted that in “this next era,” although governments will push for more KYC, privacy tools will keep improving thanks to cypherpunks, and institutions will likely adopt some of those tools because they “will want to control their own (stablecoin) wallets,” for example. So institutions aren’t necessarily opposed to the cypherpunk vision of achieving privacy and autonomy via technology. Just don’t expect them to be altruistic. “Of course, they will not proactively work to give you the user a self-sovereign wallet,” he added. “Doing _that_ in a way that is secure for regular users is the task of Ethereum cypherpunks.” 🤔
(In case you missed it, Lucy Harley-McKeown recently asked: What in the world is a neo-cypherpunk?)
Follow us on Twitter and Bluesky—or get corporate with us on LinkedIn.

