Why a billionaire wants to decentralize TikTok
Project Liberty’s gambit, and why Perplexity is … perplexing
Happy Wednesday! This Glitch brings you the story behind a “people’s bid” to buy TikTok (to save the internet, of course), observations about an AI startup that seems to be stomping on old-school internet rules, and, as always, our signature spread of odds and ends.
PS: Glitch will take a break next week, and we’ll see you again in two weeks. Happy Summer!
Project Liberty wants to remake the internet’s architecture, beginning with TikTok (somehow).
Big tech companies have done more than just monopolize the internet, Frank McCourt argues—they’ve subjugated us for profit, and in the process denied us our “digital personhood.” The only way to reclaim our humanity online, he says, is to migrate to an internet that runs on a new technical infrastructure. That’s why the billionaire real estate developer and former owner of the LA Dodgers is now putting together a proposal to buy … TikTok?
Let’s back up. In 2021, McCourt launched an initiative called Project Liberty, which at the time billed itself as a $100 million effort “to enable a more equitable technology infrastructure for the internet.” According to a recent press release, the project, which includes a nonprofit called the Project Liberty Foundation and a technology company called Project Liberty Labs, is now “supported by a $500 million commitment by McCourt.” Until recently, though, it had kept a relatively low profile.
That changed in March, when McCourt published a book articulating Project Liberty’s case. Titled Our Biggest Fight, it was co-written with Michael Casey, a former Wall Street Journal reporter and chief content officer at CoinDesk. Then last month, Project Liberty announced that McCourt was building a consortium to make what it called a “people’s bid” to acquire TikTok.
In April, President Biden signed a bill giving ByteDance, TikTok’s Chinese parent company, 270 days to sell it—or else it will be banned from US app stores. Last month TikTok sued the government, calling the law unconstitutional. Even if it were to sell, Chinese export laws seem likely to prevent ByteDance from including TikTok’s powerful proprietary algorithm in the deal.
McCourt isn’t interested in TikTok’s algorithm, though. His goal is to transport TikTok’s 170 million US users to Project Liberty’s version of the digital promised land.
“A new information model”
Tech companies are using algorithms like TikTok’s to “dehumanize” us, McCourt and Casey write in Our Biggest Fight. “With hidden, proprietary, and self-updating algorithms that are constantly learning from our data, they curate the torrent of social media that has become the primary source of information for billions of people,” they write. “In doing so, they’ve learned to tap into our most basic instincts to engender the conditions that maximize our engagement with their online platforms, and, by extension, their profits.”
McCourt and Casey make the case that this profit motive has done grave damage to democracy. If left unchecked, things will only get worse, particularly as generative AI becomes more pervasive. The damage can’t be undone through policy alone, they argue: the internet needs a new technical infrastructure designed to give users control over their digital identities and data.
Project Liberty is doing more than talking about this. Its tech company, Project Liberty Labs, has developed open-source software called the Decentralized Social Networking Protocol, or DSNP. The team has implemented DSNP on a custom blockchain called Frequency, which is part of Polkadot, a network of interoperable chains.
“DSNP leverages a blockchain for the very precise, foundational implementation of the social graph and public message routing (a message is a general term for any public interaction on social media—a post, video, picture, comment, reaction, etc.),” Braxton Woodham, co-creator of DSNP and president of Project Liberty Labs, wrote in a post explaining the protocol.
Like other decentralized social media protocols that have emerged recently, including Ethereum-based Farcaster and the AT Protocol underlying Bluesky (which does not use a blockchain), DSNP has its own way of using cryptography to manage each user’s identity and keep track of their social graph. The posts, photos, and videos they share are stored on centralized servers run by DSNP-based applications.
Also like its peers, DSNP is designed to let its users easily transport this data, as well as their content, from one application to any other application running the protocol. “DSNP-based applications are expected to be extremely clear about how a person’s data is managed and allow people some meaningful control over their data,” an FAQ reads. “People may choose to have public relationships and content which everyone knows about, or private relationships and content that is known only to those the person chooses.” Users who want to leave a service can also tell the service to stop displaying their content.
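To make that architecture a bit more concrete, here is a toy Python sketch of the general pattern these protocols share. It is illustrative only, not DSNP’s actual API: the names are invented, and the fake “keypair” stands in for the real public-key signatures that DSNP, Farcaster, and the AT Protocol use. The core idea is that identity is cryptographic, content lives off-chain with the application, and only a compact announcement is anchored to the shared chain.

```python
# Toy illustration of the decentralized-social pattern (NOT DSNP's real API).
import hashlib
import json
import secrets

# Identity: a stand-in for a real cryptographic keypair. The user, not an
# application, controls this, which is what makes the identity portable.
user_private_key = secrets.token_hex(32)
user_id = hashlib.sha256(user_private_key.encode()).hexdigest()[:16]

# Content: the post itself is stored off-chain, on servers run by whatever
# DSNP-style application the user happens to be using.
post = {"author": user_id, "text": "hello from a portable identity"}
content_bytes = json.dumps(post, sort_keys=True).encode()
content_hash = hashlib.sha256(content_bytes).hexdigest()
off_chain_store = {content_hash: content_bytes}

# Routing: only a compact announcement (author + content hash) is anchored
# to the shared chain, so any app running the protocol can discover the
# post, verify it wasn't tampered with, and let the user carry their
# identity and graph from one app to another.
chain_announcements = [{"author": user_id, "content_hash": content_hash}]
print(chain_announcements)
```

The real protocols replace the fake key with actual signatures and the Python list with a blockchain, but the division of labor is the same: the chain holds the lightweight social graph and routing data, while applications hold the heavyweight content.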
Which brings us back to TikTok. Project Liberty isn’t interested in TikTok’s proprietary algorithm because it wants to “rearchitect” the platform based on DSNP. It’s not just about giving users more control over their data, but about doing so in a way that breaks the cycle of algorithm-driven outrage and division, Project Liberty president Tomicah Tillemann said at the Consensus crypto conference in Austin last month. The goal is to create “a different set of incentives to encourage better interactions than we are currently getting on a lot of our digital platforms,” he said during a panel discussion on “digital rights.” Tillemann didn’t elaborate on exactly how those new incentives would be created, however.
McCourt and Casey said something similar in their book: “We urgently need a new information model that surfaces our empathetic instincts and celebrates people’s prosocial behavior.”
Maybe they’re right. It seems like a stretch, though, that this is simply a matter of giving users more control over their data. Then what? What else awaits users in this digital promised land? If Project Liberty lands TikTok, its main challenge (besides all that rearchitecting) will be keeping those 170 million users around. Addictive algorithms work great for that. —Mike Orcutt
The Perplexity controversy is another reminder that the internet’s good old days are over.
Backed by Jeff Bezos, Balaji Srinivasan, Databricks, Nvidia, and many others, and valued at close to $3 billion, AI-powered search company Perplexity is positioning its product as a competitor to Google, one capable of ushering in a new era of internet search. But things have gotten bumpy lately.
Earlier this month, editors at Forbes were livid when Perplexity appeared to blatantly rip off a big investigation they’d just published. When confronted about it on Twitter/X, Perplexity’s CEO Aravind Srinivas essentially said “thanks for the feedback,” to which a Forbes editor basically responded, “uh no, what I was saying was, you’re stealing from us.” Forbes’s lawyers then sent Perplexity a letter threatening legal action if the company didn’t change its ways.
So far, so “internet in 2024”: AI companies doing what they want, when they want, with whatever data they can scrape. Things got more interesting last week when Wired, perhaps spurred by the Forbes dustup, decided to see whether Perplexity was reproducing articles that should have been off-limits to the company’s crawlers.
Surprise! Wired found that Perplexity was almost certainly ignoring the publisher’s instructions and surreptitiously accessing articles, which it would then use to build answers to queries from its users. Arguably even worse, Wired’s investigation also found evidence that Perplexity would sometimes fail to crawl relevant websites for information and instead deliver wildly fabricated results.
Perplexity is building its brand based on delivering accurate, relevant information to users, so delivering AI-powered lies and fantasies seems bad. (When Wired reached out to Perplexity for comment, Srinivas said “The questions from Wired reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.”)
But beyond that, the company’s actions highlight a weakness in the fabric of the internet that is increasingly being exploited: robots.txt.
As the name suggests, robots.txt is a text file that most websites publish to tell automated web crawlers which parts of the site are okay to crawl and which are off-limits. Publishers like Wired, the New York Times, the BBC, and many others have used it to block AI companies’ crawlers from accessing their articles. It’s the core of a voluntary standard known as the Robots Exclusion Protocol, adopted in the early days of the web. There are no legal or technical impediments to ignoring the protocol, or whatever is in a site’s robots.txt file. But for thirty years, the arrangement has mostly worked.
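To see how simple the mechanism is, here’s a minimal sketch in Python. The robots.txt rules below are hypothetical, modeled on the AI-crawler blocks publishers have been adding, and the parser is the one that ships in Python’s standard library; a well-behaved crawler is expected to run a check like this before fetching anything.

```python
# Minimal sketch of the Robots Exclusion Protocol in action, using
# Python's standard-library parser. The rules are hypothetical, modeled
# on the AI-crawler blocks that publishers have added to their sites.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks before fetching. Nothing technically stops a
# non-compliant one from skipping this check entirely.
print(parser.can_fetch("GPTBot", "https://example.com/article"))   # False
print(parser.can_fetch("SomeBot", "https://example.com/article"))  # True
```

The catch, of course, is that the check is entirely optional: a crawler that never bothers to ask faces no technical obstacle at all.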
The rise of generative AI is changing that (for more on this, check out David Pierce’s excellent story earlier this year in The Verge). OpenAI has made a point of saying its GPTBot abides by robots.txt… but it only started saying that publicly after it had hoovered up vast amounts of the internet to train its large language models. One of the key observations from Wired’s investigation was that Perplexity claims to abide by robots.txt yet used unverified IP addresses to crawl Wired’s articles.
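How would a publisher even check whether a bot is who its user-agent string claims to be? One common technique, which Google documents for verifying Googlebot, is a reverse-DNS lookup followed by a forward confirmation. Here’s a hedged sketch; the domain suffix is illustrative, since each crawler operator publishes its own.

```python
# Sketch of reverse-DNS verification with forward confirmation.
# The default suffix is illustrative (Google publishes googlebot.com
# for Googlebot); other operators publish their own domains or IP ranges.
import socket

def crawler_ip_is_genuine(ip: str, expected_suffix: str = ".googlebot.com") -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS (PTR lookup)
    except socket.herror:
        return False  # no PTR record, so the IP can't be verified
    if not hostname.endswith(expected_suffix):
        return False
    # Forward-confirm: the claimed hostname must resolve back to this IP,
    # since anyone can set an arbitrary PTR record on their own netblock.
    forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    return ip in forward_ips
```

An IP that fails a check like this is exactly what “unverified” means here: the traffic behaves like a known crawler but can’t be tied back to the company that claims to operate it.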
Again, the Robots Exclusion Protocol is voluntary; it’s not enforceable. It’s from a simpler time, when developers trusted one another to be good citizens of the internet. Now it increasingly looks like a vulnerability that some AI companies are happy to exploit.
It’s part of a central conversation for the age of generative AI. As more and more huge AI models come online that gather original content, train on it, and pass off their regurgitations as “new content,” how do we protect the rights of people who create the originals?
What Perplexity and its ilk make clear is that old ways of doing things that are based on trust and honor systems are unlikely to hold up. Firmer boundaries are needed, and that may require new technological tools that can shift the balance of power back into the hands of creators. Either so that AI doesn’t consume every original idea anyone’s ever posted online—or so that if it does, at least the people who came up with those ideas first can benefit from them. —Michael Reilly
ODDS/ENDS
Trading firm Jump Crypto donated $10 million to pro-crypto Super PAC Fairshake, bringing its total election-year war chest to $169 million. If crypto really is winning, the PAC money is a big reason.
Cameron and Tyler Winklevoss each donated $1 million in bitcoin to Donald Trump. The twins announced this via separate posts on Twitter/X, each citing the Biden Administration’s supposed “war on crypto.”
Meanwhile, Trump is “in talks” to speak at a big Bitcoin conference in Nashville in July. That’s according to a report by Axios.
A bill aimed at protecting kids online “has the greatest momentum of any broad tech industry legislation in years,” according to The New York Times. The report suggests this is in part because dozens of parents who say they have lost children to social media-related incidents or harms have organized lobbying campaigns in support of the legislation, called the Kids Online Safety Act. Separately, US Surgeon General Vivek Murthy said last week that he would push Congress to require tobacco-style warning labels on social media platforms to advise parents of the platforms’ risks to their children’s mental health.
The Biden Administration plans to ban the sale of antivirus software made by Russia-based Kaspersky Lab, according to Reuters. Secretary of Commerce Gina Raimondo said Russia’s influence over the company poses a significant security risk.
Something weird happened between the crypto exchange Kraken and cybersecurity firm CertiK. Did CertiK “extort” Kraken? That’s what Kraken’s chief security officer Nick Percoco alleged when he took to Twitter/X to detail the back-and-forth he’d had with a “security researcher” who had reported a vulnerability in Kraken’s system the week before. Apparently, the researcher had found a hole in Kraken’s software that “allowed a malicious attacker, under the right circumstances, to initiate a deposit onto our platform and receive funds in their account without fully completing the deposit,” Percoco explained.
According to Percoco, the researcher used the vulnerability to credit $4 to their account. That would have been enough to prove the flaw and qualify for a bug bounty. But, Percoco said, the researcher then disclosed it to two other individuals, and the three accounts proceeded to take more money, eventually withdrawing $3 million. When Kraken requested the $3 million back along with “a full report” of the researchers’ activities, the researchers refused unless Kraken provided a “speculated $ amount that this bug could have caused if they had not disclosed it,” he said.
CertiK eventually revealed that the researchers were its employees and returned the funds. But by then a public Twitter spat had erupted. CertiK says it took the cash in multiple lots over five days because it wanted to “test the limit” of Kraken’s risk controls. But other security researchers said $3 million was way too much for a white hat hacker to take.
There’s yet another twist—online sleuths have been on the case, pointing to a transaction trail behind the scenes that shows that a CertiK-linked account also sent some of the money taken from Kraken to the crypto protocol Tornado Cash, which has been sanctioned for money laundering by the US Department of the Treasury. CertiK, which is based in the US, hasn’t explained those transactions. It could be that more official forces than your local onchain detective will find this one interesting.
Follow us on Twitter or get corporate with us on LinkedIn—if you want.