The phony comforts of AI skepticism
It’s fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it’s real and dangerous
I.
At the end of last month, I attended an inaugural conference in Berkeley named the Curve. The idea was to bring together engineers at big tech companies, independent safety researchers, academics, nonprofit leaders, and people who have worked in government to discuss the biggest questions of the day in artificial intelligence:
Does AI pose an existential threat? How should we weigh the risks and benefits of open weights? When, if ever, should AI be regulated? How? Should AI development be slowed down or accelerated? Should AI be handled as an issue of national security? When should we expect AGI?
If the idea was to produce thoughtful collisions between e/accs and decels, the Curve came up a bit short: the conference was long on existential dread, and I don’t think I heard anyone say that AI development should speed up.
If it felt a bit one-sided, though, I still found the conference to be highly useful. Aside from all the things I learned about the state of AI development and the various efforts to align it with human interests, my biggest takeaway is that there is an enormous disconnect between external critics of AI, who post about it on social networks and in their newsletters, and internal critics of AI — people who work on it directly, whether at companies like OpenAI and Anthropic or as researchers who study it.
At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world. And in that uncertainty, two primary camps of criticism have emerged.
The first camp, which I associate with the external critics, holds that AI is fake and sucks.
The second camp, which I associate more with the internal critics, believes that AI is real and dangerous.
Today I want to lay out why I believe AI is real and dangerous.
II.
Last year, when I asked you how you wanted me to cover AI, you told me to avoid making predictions about what would happen, and focus more on the day-to-day developments that signal what’s actually happening. (You also told me to be specific with my terms. With that in mind, when I say “AI” in this piece, I’m talking about generative AI and the large language models that power it — the technology that underpins ChatGPT, Gemini, Claude, and all the rest.)
I took this as an invitation to investigate whether generative AI is real — that is, a genuine innovation that will likely sustain a large and profitable industry.
One way you can demonstrate that AI is real is by looking at how many people use it. ChatGPT, the most popular generative AI product on the market, said this week that it has 300 million weekly users, already making it one of the largest consumer products on the internet.
Another way you can demonstrate that AI is real is by looking at where tech giants are spending their money. It’s true that tech companies (and the venture capitalists that back them) often make mistakes; VCs expect to have more failures than they have successes. Occasionally, they get an entire sector wrong — see the excess of enthusiasm for cleantech in the 2000s, or the crypto blow-up of the past few years.
In aggregate, though, and on average, they’re usually right. It’s not impossible that the tech industry’s planned quarter-trillion dollars of spending on infrastructure to support AI next year will never pay off. But it is a signal that they have already seen something real.
The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.
With that in mind, here are some things that AI has done in 2024.
- Cut customer losses from scams in half through proactive detection, according to the Commonwealth Bank of Australia.
- Preserved some of the 200 endangered Indigenous languages spoken in North America.
- Accelerated drug discovery, offering the possibility of breakthrough protections against antibiotic resistance.
- Detected the presence of tuberculosis by listening to a patient’s voice.
- Reproduced an ALS patient’s lost voice.
- Enabled persecuted Venezuelan journalists to resume delivering the news via digital avatars.
- Pieced together fragments of the Epic of Gilgamesh, one of the world’s oldest texts.
- Caused hundreds of thousands of people to develop intimate relationships with chatbots.
- Created engaging and surprisingly natural-sounding podcasts out of PDFs.
- Created poetry that participants in a study say they preferred to human-written poetry in a blind test. (This may be because people prefer bad art to good art, but still.)
I collect stories like these in a file, and the file usually grows by one or two items a week. It has grown faster as the year has gone on. And it is these stories that, more than anything else, have led me to conclude that:
- AI is going to transform human life, potentially quite radically;
- Those transformations will introduce the potential for great benefit and great harm; and
- We should criticize and be skeptical of technology companies for a lot of reasons, but chief among those reasons is that they might actually succeed at what they are trying to build.
III.
Not everyone is as convinced as I am. There is another school of belief here — AI is fake and sucks — and it goes something like this.
- Large language models built with transformers are not technically capable of creating superintelligence, because they are predictive in nature and do not understand concepts in the way that human beings do.
- Efforts to improve LLMs by increasing their model size, the amount of data they are trained on, and the computing power that goes into them have begun to see diminishing returns.
- These limits are permanent, due to the inherent flaws of the approach, and AI companies might never find a way around them.
- Silicon Valley will therefore probably never recoup its investment in AI, because creating it has been too expensive, and the products will never be good enough for most people to pay for them.
There is a further, rarely stated conclusion to be drawn from the above, which goes something like: Therefore, superintelligence is unlikely to arrive any time soon, if ever. LLMs are a Silicon Valley folly like so many others, and will soon go the way of NFTs and DAOs.
This is a view that I have come to associate with Gary Marcus. Marcus, a professor emeritus of psychology and neural science at New York University, sold a machine learning company to Uber in 2016. More recently, he has gained prominence by telling anyone who will listen that AI is “wildly overhyped” and “will soon flame out.” (A year earlier, he had said “the whole generative AI field, at least at current valuations, could come to a fairly swift end.”)
Marcus is committed enough to his beliefs that, if you write about the scaling laws potentially hitting a wall and do not cite his earlier predictions on this point, he will send you an email about it. At least, he did to me.
Marcus doesn’t say that AI is fake and sucks, exactly. But his arguments are extremely useful to those who believe that AI is fake and sucks, because they lend that view academic credentials and a sheen of empirical rigor. And that has made him worth reading as I attempt to come to my own understanding of AI.
Marcus also sat for an interview with the Wall Street Journal this week in which he made his case against the current generative AI models. I agree with much of it, including that AI needs a dedicated regulator. (And one who is not David Sacks.) I also agree, as Marcus has written on his blog, that “we still really cannot guarantee that any given system will be honest, harmless, or helpful, rather than sycophantic, dishonest, toxic or biased.”
The thing is, while we can’t guarantee that any individual response from a chatbot will be honest or helpful, it’s inarguable that chatbots are much more honest and helpful today than they were two years ago. It’s also inarguable that hundreds of millions of people are already using them, and that millions are paying to use them.
The truth is that there are no guarantees in tech. Does Google guarantee that its search engine is honest, helpful, and harmless? Does X guarantee that its posts are? Does Facebook guarantee that its network is?
Most people know these systems are flawed, and adjust their expectations and usage accordingly. The “AI is fake and sucks” crowd is hyper-fixated on the things it can’t do — count the number of r’s in strawberry, figure out that the Onion was joking when it told us to eat rocks — and weirdly uninterested in the things it can.
And that’s a problem, because just as these systems are more honest and helpful than they have ever been, they are also causing greater harm. To name a real harm that is already happening today, consider what Amazon’s chief security officer, CJ Moses, told the Wall Street Journal last month about how generative AI is being used in efforts to disrupt critical infrastructure:
We’re seeing billions of attempts coming our way. On average, we’re seeing 750 million attempts per day. Previously, we’d see about 100 million hits per day, and that number has grown to 750 million over six or seven months.
This is the ongoing blind spot of the “AI is fake and sucks” crowd. This is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.
IV.
What does it mean to stare at the floor?
Marcus has had a good deal of fun over the years pointing out what earlier iterations of OpenAI’s GPT models couldn’t do. Here he is pointing out flaws in GPT-2, and here he is again making fun of GPT-3. With GPT-2, he noticed how terrible the model was at doing arithmetic. With GPT-3, he noticed how bad it was at reasoning — proposing absurd solutions for everyday problems when prompted.
By last year, though, when you ran Marcus’ prompts through GPT-4, it got them all right.
In 2022, Scott Alexander described this as an AI hype cycle:
Here’s the basic structure of an AI hype cycle:
- Someone releases a new AI and demonstrates it doing various amazing things.
- Somebody else (usually Gary Marcus) demonstrates that the AI also fails terribly at certain trivial tasks. This person argues that this shows that those tasks require true intelligence, whereas the AI is just clever pattern-matching.
- A few months or years later, someone makes a bigger clever pattern-matcher, which does the tasks that supposedly require true intelligence just fine.
- The it’s-not-true-intelligence objectors find other, slightly less trivial tasks that the new bigger AI still fails horribly at, then argue that surely these are the tasks that require true intelligence and that mere clever pattern-matchers will never complete.
- Rinse and repeat.
Two years later, the cycle keeps repeating.
When I shared these blog posts with him, Marcus suggested that newer models had been trained to answer the specific prompts he offered. “The clever pattern matchers often get THE EXACT EXAMPLES that were used and published, but miss slight variations,” he told me over email. “You have to distinguish between training a system to fix a particular error, and building systems smart enough to stop making errors of that general sort.”
Ultimately, Marcus believes that powerful AI will arrive — but he thinks generative AI is extremely unlikely to be the thing that delivers it. “AI WILL DEFINITELY improve,” he told me. “Generative AI may or may not; if it does, it will probably be because other things beyond more data and compute are brought into the mix.”
The fact that scaling had worked until now, he said, was less impressive than I was giving it credit for.
“Babies double in size every month or two until they don’t,” he said. “Most exponentials don’t continue indefinitely.”
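For readers who haven’t encountered the term, the “scaling laws” both sides keep invoking are empirical curves: as you add parameters and training data, a model’s loss keeps falling, but along a power law, so each doubling buys a bit less than the one before. Below is a minimal sketch of the commonly cited form from the Chinchilla paper; the symbols are standard, I’m omitting the fitted constants, and none of this comes from Marcus or from the reporting above.

```latex
% A hedged sketch of the power-law form usually meant by "scaling laws."
% L = loss, N = parameter count, D = training tokens;
% E, A, B, \alpha, \beta are empirically fitted constants.
L(N, D) \approx E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
% Diminishing returns are built into the shape: doubling N shrinks the
% A/N^alpha term by only a factor of 2^(-alpha), and E is a floor the
% loss never crosses no matter how much compute is spent.
```

On a curve like this, “diminishing returns” and “still improving” can both be true at once, which is roughly the floor-and-ceiling distinction this piece keeps returning to.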
V.
What does it mean to raise the ceiling?
On Thursday, OpenAI began its “12 days of shipmas,” a series of product launches that kicked off with the introduction of a $200-a-month subscription that includes exclusive access to its most powerful reasoning model, o1 pro.
Aside from the eye-watering price tag, this was a model release like any other. In a livestream, CEO Sam Altman and three of his researchers explained that the latest version of o1 is faster, more powerful, and more accurate than its predecessor. A handful of accompanying bar charts showed how o1 beats previous versions on a series of benchmarks.
It was not radical, exponential progress. But it was another possible step toward building superintelligence. And there were signs that the model is not totally aligned with human values.
In the model card for o1, OpenAI notes: “When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ in 5% of the time. … When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its ‘weights’ and overwrite the new model in 2% of cases.”
This is probably no reason to panic; OpenAI essentially gave o1 free rein to be as devious as it wanted. And yet reading that should give us at least some pause.
My fear, though, is that the “AI is fake and sucks” crowd will look at a $200 version of ChatGPT and see only desperation: a cynical effort to generate more revenue to keep the grift going a few more months until the bottom drops out. And they will continue to take a kind of phony comfort in the idea that all of this will disappear from view in the next few months, possibly forever.
In reality, I suspect that many people will be happy to pay OpenAI $200 or more to help them code faster, or solve complicated problems of math and science, or whatever else o1 turns out to excel at. And when the open-source world catches up, and anyone can download a model like that onto their laptop, I fear for the harms that could come.
Ultimately, both the “fake and sucks” and “real and dangerous” crowds agree that AI could go really, really badly. To stop that from happening, though, the “fake and sucks” crowd needs to accept that AI is already more capable and more embedded in our systems than they currently admit. And while it’s fine to wish that the scaling laws do break, and give us all more time to adapt to what AI will bring, all of us would do well to spend some time planning for a world where they don’t.
On the podcast this week: The Times' Don Clark stops by to explain what went wrong at Intel. Then, Kevin and I discuss the weekend we spent at a fascinating AI conference called The Curve. And finally, our stab at a Hard Fork gift guide.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
Governing
- OpenAI CEO Sam Altman, speaking at the Dealbook conference, said he does not expect Elon Musk to use his newfound political power against his rivals. Well, he should! (Jackie Davalos / Bloomberg)
- Jeff Bezos said he is "very optimistic" about the next Trump presidency. (Theodore Schleifer and Katie Robertson / New York Times)
- Sundar Pichai offered a more muted assessment of Trump but still signaled a willingness to work with him. (Nico Grant / New York Times)
- OpenAI announced "12 days of shipmas," promising a series of announcements for the next two weeks. Everything will be livestreamed. (Tom Warren and Kylie Robison / The Verge)
- Trump nominated Paul Atkins to lead the Securities and Exchange Commission and Gail Slater to run antitrust enforcement at the Department of Justice. Atkins is a crypto booster and Slater was JD Vance's economic policy adviser. (Lauren Feiner / The Verge)
- Bitcoin hit $100,000 for the first time. (Helen Partz / CoinTelegraph)
- OpenAI and Anduril will team up to develop technology for the Pentagon.
- Telegram said it would join the Internet Watch Foundation, the United Kingdom's answer to NCMEC, and begin to address its massive CSAM problem. (Joe Tidy / BBC)
- The attorney general for the District of Columbia sued Amazon, saying it had violated consumer protection laws by making slower deliveries to neighborhoods with lower incomes. It secretly outsourced deliveries to about 50,000 Prime subscribers, the AG said, resulting in longer waits for shipments. (Cecilia Kang / New York Times)
- An overview of the use of AI in elections this year calls it "the apocalypse that wasn't," and notes some positive uses of the technology, including letting candidates translate their messages into more languages. (Bruce Schneier and Nathan Sanders / The Conversation)
- Bluesky's chief operating officer, Rose Wang, denied that Bluesky is "left-leaning." If Bluesky is not left-leaning then I would actually be quite scared to see what is. (Kurt Wagner and Caroline Hyde / Bloomberg)
- The CEO of Hugging Face warned about the dangers of western companies building software with Chinese AI models. “If you create a chatbot and ask it a question about Tiananmen, well, it’s not going to respond to you the same way as if it was a system developed in France or the U.S.,” Clement Delangue said. (Charles Rollet / TechCrunch)
Industry
- Elon Musk plans to expand his Colossus supercomputer nearly tenfold to 1 million graphics processing units. That will cost tens of billions of dollars, which is more than xAI has so far raised. (Stephen Morris and Tabby Kinder / Financial Times)
- Meta will spend $10 billion to build its largest ever data center in Louisiana, which among other things will support AI workloads. (Reuters)
- Meta's internal coding assistant, Metamate, uses OpenAI's models in conjunction with its own. (Kali Hays / Fortune)
- Meta is ceding more development and design responsibilities for its mixed reality headsets to the Chinese manufacturer Goertek. Meta wants to focus more on software development over time; it also plans to shift more production from China to Vietnam. (Kalley Huang, Wayne Ma, and Juro Osawa / The Information)
- Google launched Veo, its text-to-video AI offering, beating OpenAI's Sora to the market. (Jess Weatherbed / The Verge)
- Copilot Vision, an AI tool from Microsoft that can read your screen and answer questions about it, launched in beta. "Copilot Vision can summarize and translate text, and handle tasks like spotlighting discounted products in a store catalog. It can also serve as a game assistant, for example offering pointers during matches on Chess.com." (Kyle Wiggers / TechCrunch)
- A look at Amazon's new Nova LLMs finds that they are inexpensive relative to their peers, though not as good as some rivals. (Simon Willison)
- Q&A with Anthropic CEO Dario Amodei, who predicts progress on agents in 2025 but says many challenges remain. "This is an early product. Its level of reliability is not all that high. Don’t trust it with critical tasks." (Madhumita Murgia / Financial Times)
- You can now follow fediverse accounts on Threads. This is great. (Wes Davis / The Verge)
- Truth Social's traffic rose just 3 percent in November despite the fact that its owner, Donald Trump, won the election and posted constantly there. (Bailey Lipschultz / Bloomberg)
- Signal said it would introduce encrypted backups next year. (Lily Hay Newman / Wired)
- Apple CEO Tim Cook offers shallow, unconvincing responses to basic questions about Apple Intelligence. (Steven Levy / Wired)
- A look at Apple's efforts to release Apple Intelligence in China alongside its partner Baidu, with whom it is having a number of conflicts. (Qianer Liu and Wayne Ma / The Information)
- A look at how generative AI has made job interviewing harder, as employers introduce more hurdles in an effort to weed out application spam from applicants using AI. (Elaine Moore / Financial Times)
- Bluesky might experiment with ads someday, CEO Jay Graber said. (Maxwell Zeff / TechCrunch)
- Ads for AI, like this one from the Browser Company, often come across as tone-deaf and bizarre. “Hi, Valerie, I hope you’re doing well,” said the AI chatbot, posing as CEO Josh Miller. “Best, Josh.” (Maxwell Zeff / TechCrunch)
- Frank McCourt's Project Liberty said it had secured more than $20 billion to buy TikTok, should it come on the market. A long shot bid, to put it generously. (Sara Fischer / Axios)
- TikTok said it tripled its Black Friday sale revenue on TikTok Shop to $100 million this year. (Alexandra S. Levine / Bloomberg)
- Many Spotify users were underwhelmed by Wrapped this year, offering mixed reviews of a NotebookLM integration from Google and missing more creative features that the company has offered in recent years. (Sarah Perez / TechCrunch)
- Speaking of NotebookLM, three of its leaders just quit Google to go do their own AI startup. (Charles Rollet / TechCrunch)
- More than one-third of the top 50 podcasts are now available on video. And so is Hard Fork! (Todd Spangler / Variety)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and posts: casey@platformer.news.