How to stop Perplexity and save the web from bad AI
We can still have the internet we want — but we have to try new business models
I.
For a while now, I’ve been gloomy about the state of the web. Plagiarism engines like Perplexity and Arc Search have attracted millions of users by ripping off other people’s work, depriving publishers of the traffic and advertising revenue that once sustained them. These products have been successful enough that Google is now following their lead.
Today, I want to talk about a more positive vision for the future of the internet — one where AI companies and creators work hand in hand to grow the web again, sharing the wealth they create with one another.
Before I get there, though, it’s worth taking a moment to reflect on how bad the status quo has gotten.
Earlier this month, Forbes noticed that Perplexity had been stealing its journalism. The AI startup had taken a Forbes scoop about Eric Schmidt’s new drone project and repurposed it for its new “Pages” product, which generates automated, book report-style web pages based on user prompts. Perplexity had apparently decided that Forbes’ reporting was the ideal showcase for what its plagiarism engine can do.
Here’s Randall Lane, Forbes’ chief content officer, in a blog post.
Not just summarizing (lots of people do that), but with eerily similar wording, some entirely lifted fragments — and even an illustration from one of Forbes’ previous stories on Schmidt. More egregiously, the post, which looked and read like a piece of journalism, didn’t mention Forbes at all, other than a line at the bottom of every few paragraphs that mentioned “sources,” and a very small icon that looked to be the “F” from the Forbes logo – if you squinted. [...]
Perplexity then sent this knockoff story to its subscribers via a mobile push notification. It created an AI-generated podcast using the same (Forbes) reporting — without any credit to Forbes, and that became a YouTube video that outranks all Forbes content on this topic within Google search.
Any reporter who did what Perplexity did would be drummed out of the journalism business. But CEO Aravind Srinivas attributed the problem here to “rough edges” on a newly released product, and promised attribution would improve over time. “We agree with the feedback you've shared that it should be a lot easier to find the contributing sources and highlight them more prominently,” he wrote in an X post.
In person, Srinivas can come across as earnest and a bit naive, as I learned when he came on Hard Fork in February. But any notion that Perplexity’s problems stem from a simple misunderstanding was dashed this week when Wired published an investigation into how the company sources answers for users’ queries. In short, Wired found compelling evidence that Perplexity is ignoring the Robots Exclusion Protocol, which publishers and other websites use to grant or deny permission to automated crawlers and scrapers.
Here are Dhruv Mehrotra and Tim Marchman:
Until earlier this week, Perplexity published in its documentation a link to a list of the IP addresses its crawlers use—an apparent effort to be transparent. However, in some cases, as both Wired and Knight were able to demonstrate, it appears to be accessing and scraping websites from which coders have attempted to block its crawler, called Perplexity Bot, using at least one unpublicized IP address. The company has since removed references to its public IP pool from its documentation. [...]
Wired verified that the IP address in question is almost certainly linked to Perplexity by creating a new website and monitoring its server logs. Immediately after a Wired reporter prompted the Perplexity chatbot to summarize the website's content, the server logged that the IP address visited the site. This same IP address was first observed by Knight during a similar test.
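For readers who have never run a website, it’s worth spelling out how lightweight this protocol is. A site publishes a plain-text robots.txt file declaring which crawlers may fetch which paths, and compliance is entirely voluntary, on the honor system. Here is a minimal sketch of what a well-behaved crawler does before fetching a page, using Python’s standard library; the bot name and URLs are placeholders, not Perplexity’s actual configuration.

```python
# Minimal sketch of Robots Exclusion Protocol compliance.
# "ExampleBot" and example.com are placeholders, not Perplexity's setup.
from urllib import robotparser

USER_AGENT = "ExampleBot"  # hypothetical crawler name
TARGET = "https://example.com/some-article"

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch(USER_AGENT, TARGET):
    print("robots.txt permits fetching this URL")
else:
    # A compliant crawler stops here; Wired's evidence suggests
    # Perplexity fetches anyway, from an unpublicized IP address.
    print("robots.txt disallows this URL; a compliant crawler backs off")
```

Because nothing enforces the protocol, the only way to catch a crawler ignoring it is exactly what Wired did: stand up a fresh site, block the bot, ask the chatbot about the site, and watch the server logs for the visit.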
Forbes sent Perplexity a cease-and-desist letter, and I imagine it won’t be the last publisher to do so. There are open legal questions about whether copyrighted material can be used to train large language models or answer chatbot queries, but I see no legal way Perplexity can get away with one of its other core techniques for building pages: using copyrighted images from Getty, the Wall Street Journal, Forbes and others. You simply are not allowed to re-publish other people’s copyrighted photos and illustrations without permission, even if your plagiarism engine is new and has “rough edges.”
Perhaps Perplexity will clean up its act; once it came under fire, the company ran to Semafor to promise that it is “working on” deals with publishers. In the meantime, though, I’ve come to think of it as the Clearview AI of generative artificial intelligence companies: scraping billions of pieces of data without permission and daring courts to stop it.
Like Clearview, Perplexity’s core innovation is ethical rather than technical. In the recent past, it would have been considered bad form to steal and repurpose journalism at scale. Perplexity is making a bet that the advent of generative AI has somehow changed the moral calculus to its benefit.
“I think we need to work together to build all these things, rather than trying to see it as, hey, like you’re taking my stuff and using it,” Srinivas told us in February.
But then he just kept taking everyone’s stuff and using it. The working together part, I guess, is meant to come later.
II.
One path forward for the web, as I shared on a recent episode of Search Engine, is the Fediverse. Decentralized, federated apps; portable identities and follower graphs; permissionless innovation on open protocols: this is a way journalists can once again begin to build audiences — stable ones! — rather than simply courting traffic. This is a years-long project, and I can only barely see the outlines of it taking shape. But it’s an appealing alternative to a world where all content is subsumed into a large language model and accessed by an opaque and proprietary set of algorithms.
But this is a long-term solution, and a partial one. And it carries with it the embedded assumption that today’s AI systems cannot be reshaped in ways that actually grow the web, and pay for the labor of the people who make it. The Fediverse is about giving up on the consumer internet as we know it today — the big walled gardens, the metastasizing LLMs — and trying to build something different.
Tim O’Reilly is thinking differently. As a publisher, investor, and open source advocate, O’Reilly sits at the intersection of many of the business problems and opportunities presented by AI. On Tuesday, he offered his solution to parasitic companies like Perplexity: new business models in which AI companies pay creators based on how much of their material the companies use.
O’Reilly is starting with his own publishing business, sharing a portion of subscription revenue with (or paying a fixed fee to) authors when it uses AI to generate summaries, test questions, translations, or other derivative works based on their writing.
When someone reads a book, watches a video, or attends a live training, the copyright holder gets paid. Why should derivative content generated with the assistance of AI be any different? Accordingly, we have built tools to integrate AI-generated products directly into our payment system. This approach enables us to properly attribute usage, citations, and revenue to content and ensures our continued recognition of the value of our authors’ and teachers’ work.
And if we can do it, we know that others can too.
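O’Reilly hasn’t published his exact formula, so take the following as an illustration of the incentive structure rather than a description of his system: meter how often AI-generated products draw on each author’s work, then split a pool of revenue pro rata. The function, the authors, and the numbers below are all hypothetical.

```python
# Hypothetical pro-rata royalty split, not O'Reilly's actual formula.
# Payouts scale with how often AI products drew on each author's work.

def royalty_payouts(usage_counts: dict[str, int], revenue_pool: float) -> dict[str, float]:
    """Split a revenue pool among authors in proportion to usage of their work."""
    total = sum(usage_counts.values())
    if total == 0:
        return {author: 0.0 for author in usage_counts}
    return {
        author: revenue_pool * count / total
        for author, count in usage_counts.items()
    }

# e.g., counts of AI-generated summaries or test questions built on each author's books
usage = {"author_a": 1200, "author_b": 300, "author_c": 500}
print(royalty_payouts(usage, revenue_pool=10_000.00))
# {'author_a': 6000.0, 'author_b': 1500.0, 'author_c': 2500.0}
```

However the accounting is actually done, the design point is the same: payouts rise with usage, so the system funds exactly the content it depends on.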
To O’Reilly, this view of AI is a natural extension of the modern web, which is built on what he calls an “architecture of participation.” The earlier web consisted of giant walled gardens like AOL and MSN, which sought to keep as much activity within their own borders as possible. In this view, companies like Google, OpenAI, and Perplexity are all competing to become the next AOL. It is a vision in which most of the benefits of AI are reaped by a very small number of companies.
But this would be a mistake, he writes, if only because the current AI business models are ultimately self-defeating. “If the long-term health of AI requires the ongoing production of carefully written and edited content — as the currency of AI knowledge certainly does — only the most short-term of business advantage can be found by drying up the river AI companies drink from,” O’Reilly writes. “Facts are not copyrightable, but AI model developers standing on the letter of the law will find cold comfort in that if news and other sources of curated content are driven out of business.”
We know that AI companies are running out of data to train their frontier models on. Given that fact, it seems ludicrous that companies like Perplexity are building systems that all but ensure they will have less data to train on in the future.
O’Reilly is taking the opposite approach. And while it remains to be seen whether the average writer on his platform benefits meaningfully from AI royalties, if nothing else he has gotten the incentive structure right. Pay people to create high-quality writing and other content; use that content with permission to train powerful AI systems; and share the wealth that those systems create to fund and incentivize the production of further high-quality writing.
If Srinivas meant it when he said “we need to work together to build all these things,” he can now look to O’Reilly for a powerful example of what working together actually looks like.
On the podcast this week: Kevin and I debate the surgeon general's push for a social media warning label aimed at teens. Then, Renée DiResta — most recently of the Stanford Internet Observatory — stops by to discuss what happened and tell us about her new book, Invisible Rulers. Plus: the Times' David Yaffe-Bellany joins to explain how crypto money is shaking up the 2024 election.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
Governing
- TikTok’s first brief against the divest-or-ban law claims that the law is unconstitutional and is a result of “political demagoguery.” (Drew Harwell / Washington Post)
- Instagram regularly recommends sexual content to teen accounts that appear interested in racy content, a series of tests found. (Jeff Horwitz / Wall Street Journal)
- A Q&A with Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board and former prime minister of Denmark, on deepfakes and the consequences of the board’s rulings. (Murad Ahmed / Financial Times)
- People have reportedly resorted to small claims courts to recover their Facebook, Instagram, and WhatsApp accounts as users grow frustrated with Meta’s atrocious customer service. (Karissa Bell / Engadget)
- US Surgeon General Vivek Murthy is wrong about social media warning labels, this author argues. (Mike Masnick / Daily Beast)
- Snap agreed to a $15 million settlement in a probe by the California Civil Rights Department, following allegations of discrimination, retaliation, and sexual harassment by female employees. (Lara Korte / Politico)
- California governor Gavin Newsom says he wants to restrict the use of smartphones during the school day for children and teens. (Justine Calma / The Verge)
- Amazon was fined $5.9 million by the California Labor Commissioner’s Office for violating a state law aimed at preventing workers from being forced to put their health and safety at risk by working too quickly. (Caroline O’Donovan / Washington Post)
- Amazon software is reportedly scanning the faces of thousands of people catching trains in the UK. (Matt Burgess / WIRED)
- Two new New York laws will require parental consent for social media companies to use “addictive feeds” – recommendation algorithms – for kids under 18, and limit collection and sale of data on minors. (Lauren Feiner / The Verge)
- Pornhub parent company Aylo is geo-blocking users in Kentucky and Indiana due to new age verification laws. (Michael McGrady Jr. / AVN)
- A scheduled EU vote on a draft law that would have required WhatsApp and Signal to scan messages for potential child sexual abuse material was reportedly canceled over encryption concerns. (Clothilde Goujard / Politico)
- Apple’s AI push is facing an obstacle in China, as ChatGPT isn’t available in the country, forcing the company to search for a local partner. (Raffaele Huang and Jiyoung Sohn / Wall Street Journal)
- Tech companies like Google and OpenAI are reportedly stepping up their personnel screening processes amid growing concerns of Chinese espionage. (Tabby Kinder, Stephen Morris and Demetri Sevastopulo / Financial Times)
- Neo-Nazis and extremists are weaponizing AI tools to spread hate speech, recruit new members, and radicalize online supporters. (David Gilbert / WIRED)
- Airbnb weakened its policies against hate groups and extremists last year and dissolved a team tasked with removing them from the platform, a whistleblower alleged. (Brandy Zadrozny / NBC News)
Industry
- OpenAI co-founder Ilya Sutskever is starting a new venture, Safe Superintelligence Inc., aiming to create a safe AI system with no near-term intentions of selling a product. (Ashlee Vance / Bloomberg)
- Anthropic’s new Claude 3.5 Sonnet is its best-performing generative AI model yet; an answer to GPT-4o, it has impressed early testers. (Kyle Wiggers / TechCrunch)
- A new feature, Artifacts, will let users see and interact with the results of Claude requests. (David Pierce / The Verge)
- TikTok built an in-app experience for Taylor Swift’s Eras Tour, letting users complete Swift-themed challenges to get digital profile frames and create friendship bracelets. (Taylor Lorenz / Washington Post)
- Meta is restructuring Reality Labs, its hardware division, splitting it into two groups: metaverse and wearables. Some employees in the division were laid off. (Alex Heath / The Verge)
- There is now an option to restrict Instagram Live streams to Close Friends. (Kris Holt / Engadget)
- Amazon plans to invest 10 billion euros in cloud infrastructure in Germany. (Reuters)
- Oculus and Anduril founder Palmer Luckey announced a new headset that is “driven by military requirements,” but could be used for non-military things. (Adi Robertson / The Verge)
- Inside Rebind, a new app that combines AI-generated commentaries with human insight for a new way of reading. (Laura Kipnis / WIRED)
- AI isn’t good at writing jokes, an experiment with 20 comedians found. (Rhiannon Williams / MIT Technology Review)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and AI business models: casey@platformer.news and zoe@platformer.news.