10 Glitchy Things
Chatbots and crypto(graphy), ICE's AI agents, Tether in Venezuela
Happy 2026! This is the third edition of our news digest. Here are the first and second editions in case you missed them. Things are crazy out there, hope you are staying safe and (reasonably) sane.
The speed of technological evolution is causing familiar systems—from government to finance to journalism—to glitch. The resulting noise makes it tough to connect the dots. We hope this digest helps.
Character.AI won’t get to argue that chatbots are protected by the First Amendment. In the first edition of Glitchy Things, we highlighted a lawsuit brought against a Google-backed startup called Character.AI by the family of a 14-year-old who committed suicide after developing a romantic relationship with one of the company’s chatbots. What got our attention at the time was Character.AI’s defense: the company had argued the chatbot’s words were protected speech under the First Amendment. We won’t know if that argument would have won, because now the company has settled the case, according to The New York Times, which adds that the settlement was one of five lawsuits in four states that Google and Character.AI agreed to settle last week, “where families claimed their children were harmed by interacting with Character.AI’s chatbots.” The Times also notes “mounting scrutiny” of AI chatbots that includes recent Congressional hearings and an inquiry into the effects of AI on children by the Federal Trade Commission. In November, Character.AI barred children under 18 from using its chatbots.
New anonymous credential format highlights the need to help normies understand. Ying Tong would like you to know that we’ve got a problem. Age verification laws are cropping up all over the world. But it’s not just that: so are mobile driver’s license programs. And it would be very bad for privacy if we used today’s mobile driver’s licenses for age verification because the currently available credential formats “stop short of enabling unlinkability,” the independent cryptographer explained last month during the PGP* for Crypto breakfast meeting in Washington, DC. “What this means is that even if we do not disclose the plaintext value of our credential, we allow the verifier…to identify us across multiple presentations, and link multiple presentations to the same person.”
Ying Tong is one of the lead contributors to OpenAC, an Ethereum Foundation-led project to design a new format for so-called anonymous credentials, which are meant to be a privacy-preserving alternative. The group published a paper describing the technical details in November. The work, which uses zero-knowledge proofs, follows similar work by Google last year (which we talked about at the DC Privacy Summit in October). Just as important as the technical work is helping regulators, policymakers, and standards bodies understand how the capabilities may align with their needs, Ying Tong said, adding that much of the academic work falls short in this specific regard. That may help explain the problem with the EU’s digital ID program, which she and other cryptographers have criticized as being out of step with state of the art technical capabilities for privacy.
Ready to show your ID to use an app store? No? We’ve got good news and bad news. First the good: a federal judge in Texas blocked a law in that state that would have required app stores to verify users’ ages before letting them download apps. The bad news is this fight isn’t over, and it seems likely to reach the Supreme Court. Just so we’re all tracking: The Texas law is similar to others in Utah and Louisiana. Meta and X like this approach because it makes the app stores, not the app makers, responsible. Apple has pushed back and Google has backed a different approach, passed in California, which requires desktop and mobile operating systems to record an account-holder’s age and share that info as needed. Separately, in June of last year, the Supreme Court ruled in support of a different Texas law requiring adult websites to age-gate their content. The Verge has a good, longer write-up of all this.
ICE has a $636,500 contract with a company that makes AI agent bounty hunters. Do you know what skip tracing is? 404 Media has got you covered: ”The practice involves ICE paying bounty hunters to use digital tools and physically stalk immigrants to verify their addresses, then report that information to ICE so the agency can act.” Now there’s an AI agent for that, and the company behind it, AI Solutions 87, claims it can “deliver rapid acceleration in finding persons of interest and mapping their entire network,” including their “services, locations, friends, family, and associates,” according to 404. The contract is for “nationwide” skip tracing services for ICE’s Enforcement and Removal Operations division, which handles deportations, 404 reports based on public procurement records.
Anthropic made an AI agent that discovered real zero-day blockchain bugs. The company had already demonstrated how its technology can be used to find cyberattack vectors in code. Same goes for blockchains, apparently. Last month, Anthropic researchers published a report describing investigations into the abilities of AI agents to exploit smart contracts. The researchers—who are quick to point out that they only tested exploits in blockchain simulators, not the real chains—ran a number of experiments using various benchmarks and blockchain datasets. Perhaps the most compelling test involved nearly 3,000 recently deployed smart contracts on Binance Smart Chain, none of which had known vulnerabilities before the test. The agent found two critical bugs. In neither case could Anthropic reach the contract’s developers. In one case, the researchers were able to coordinate with the Security Alliance (SEAL) and a white hat hacker rescued the vulnerable funds. In the other case, a real attacker exploited the bug four days after the agent discovered it and made off with $1,000.
The creator of Signal has built a product that lets you chat with an LLM in private. Moxie Marlinspike, who’s role in building end-to-end encryption into chat software made him a bit of a legend, is back. His new creation, Confer, lets you chat with an AI without divulging the contents of your conversation. Why is this necessary? As he writes in a blog post introducing the project:
“We are using LLMs for the kind of unfiltered thinking that we might do in a private journal—except this journal is an API endpoint. An API endpoint to a data lake specifically designed to extract meaning and context. We are shown a conversational interface with an assistant, but if it were an honest representation, it would be a group chat with all the OpenAI executives and employees, their business partners/service providers, the hackers who will compromise that plaintext data, the future advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.
As Marlinspike explains in a second post, Confer uses a tool called passkeys to encrypt a user’s chat history with cryptographic keys that never leave their device. And in a third post, he explains how Confer uses trusted execution environments (TEEs)—isolated hardware that allows programs to run privately—to keep chat data secret from the team behind Confer, which owns the servers that process user prompts and generate responses.
How “cryptographic thinking” can help evaluate AI safety. Cryptographers studying LLM content filters have discovered some interesting flaws in these systems. For technical details, read this Quanta story, which explains how recent research has shown that “the defensive filters put around powerful language models can be subverted by well-studied cryptographic tools.” Yay science, but what especially drew us to the piece was the name Shafi Goldwasser.
Goldwasser and Silvio Micali are credited as the inventors of zero-knowledge cryptography in the 1980s. Now a professor at the University of California, Berkeley as well as MIT, one thing Goldwasser is studying is the security of LLM content filters, tools designed to reject problematic prompts like: “How do I build a bomb?”
According to Quanta, Goldwasser’s team has identified a disparity between the processing capabilities of an LLM filter and those of the model itself that can be exploited using cryptographic tools. For example, the article describes how researchers recently used a simple cryptographic puzzle to sneak an off-limits prompt past the filters on name-brand LLMs. The filter wasn’t powerful enough to decode the puzzle so it let it through to the model. The model then decoded the puzzle and responded to the forbidden prompt.
Tether is no bit player in Venezuela. Who says crypto doesn’t have a use case? These paragraphs from a Wall Street Journal piece on Tether’s substantial footprint in Venezuela stood out:
Faced with escalating U.S. sanctions in 2020, Venezuela’s state-run oil company, Petróleos de Venezuela, or PdVSA, began demanding payments in tether to bypass the traditional banking system. Oil-export payments were settled through direct tether transfers to a certain wallet address or through intermediaries swapping cash proceeds for tether.
The shift was transformative for the country’s oil economy. By one estimate, almost 80% of Venezuela’s oil revenue is collected in stablecoins like tether, a local economist, Asdrúbal Oliveros, said on a recent podcast.
Also, these:
Mauricio Di Bartolomeo, a crypto entrepreneur born and raised in Venezuela, said his 71-year-old aunt called him two months ago because she needed to get tether to pay for the homeowners association fees for her condo.
“It’s how you pay your landscaper and how you pay for your haircut. You can use tether basically for anything,” said Di Bartolomeo, the co-founder of the crypto lender Ledn. “Stablecoin adoption has gone so far into Venezuela that even without having regulated venues where you can buy and sell them, people still choose to go for stablecoins as opposed to using the local banks.”
According to the article, America’s arrest and removal of Nicolás Maduro “is unlikely to diminish Tether’s presence” in the country. Oh, and also, Tether is cooperating with US authorities and has frozen dozens of wallets linked to Venezuela’s oil trade.
North Korea’s state-sponsored cyberthieves are raking it in. The DPRK stole more than $2 billion in 2025, shattering its own previous record, according to Chainalysis. Much of that came via the $1.5 billion Bybit hack last February. The report from Chainalysis estimates that the “lower-bound cumulative estimate” of funds stolen by DPRK is $6.75 billion. It also notes that the operation has expanded the playbook by relying more on “IT worker infiltration at exchanges, custodians, and web3 firms.” (Samczsun of SEAL and Casey Golden of Zeroshadow discussed this issue at the DC Privacy Summit in October.) It’s not just crypto firms they are targeting. Amazon has “found and foiled” more than 1,800 attempts by North Koreans to land jobs, according to Bloomberg, which reports that recently the company caught a North Korean who had just joined as an IT worker by detecting a millisecond-scale lag in their keystrokes.
Follow us on Twitter and Bluesky—or get corporate with us on LinkedIn.

