OpenAI loses its voice
The company hasn’t been the same since Sam Altman’s return — and its treatment of Scarlett Johansson should worry everyone
OpenAI hasn’t been the same company since Sam Altman returned as its CEO.
It’s a thought that has occurred to me repeatedly over the past week as the company has found mounting reasons to stop, reverse course, and apologize over product and personnel issues. And it’s one that feels increasingly urgent in a world where the company’s technology is poised to serve as the artificial intelligence backbone on both Microsoft and Apple devices.
On Monday, actress Scarlett Johansson revealed that the OpenAI CEO had been negotiating with her for the better part of a year to lend her voice to ChatGPT. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI,” Johansson said in a statement published by NPR’s Bobby Allyn. “He said he felt that my voice would be comforting to people.”
Johansson declined to participate. But when OpenAI unveiled voices for ChatGPT in September, users were struck by the uncanny similarity between the voice it called Sky and Johansson’s. And it should not have been a surprise: Altman has said that Her, in which Johansson voices an AI assistant with whom the film’s protagonist falls in love, is his favorite movie. Last week, after OpenAI showcased an updated version of Sky that takes advantage of its latest models, Altman posted the movie’s title suggestively on X.
“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said.
But there is even more to the story, Johansson wrote.
“Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider,” she wrote. “Before we could connect, the system was out there.”
And so even as Altman attempted to renegotiate with Johansson behind the scenes, he intentionally drew the attention of his nearly 3 million followers on X to Sky’s similarity to Johansson’s performance in Her — and released the product before finishing the negotiations.
In the moment, it looked like a grand success. The OpenAI team’s demonstration of a warmer, more emotional chatbot last week was one of the most impressive tech demos I’ve seen. Personally, I found its aggressive flirting unsettling, and seemingly at odds with Altman’s previous statements to me that the company would not pursue AI tools that doubled as romantic companions. But the demo unquestionably succeeded in captivating the tech world — and stealing thunder from Google on the eve of its annual developer conference.
Behind the scenes, though, Johansson was starting to ask questions. She hired attorneys, “who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice,” she wrote.
That did the trick: on Sunday night, OpenAI announced it would pull the Sky voice from ChatGPT. It also put up a blog post outlining the casting and recording process for the five voices it eventually selected for ChatGPT, Sky included.
OpenAI has taken pains to frame all of this as a big misunderstanding. CTO Mira Murati told reporters last week that Sky is not intended to sound like Johansson. And earlier on Monday, the company’s model behavior lead, Joanne Jang, told The Verge that “We’ve been in conversations with ScarJo’s team because there seems to be some confusion.”
OpenAI did not respond to my requests for comment today. But Johansson’s statement suggests that in reality, there was never any legitimate confusion about what was going on here. Altman asked to license her voice to realize the fantasy of a real-life Her; she declined; and the company proceeded anyway with a voice as similar to hers as it could get.
“I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected,” Johansson said.
II.
Johansson is one of the world’s most famous actresses, and she speaks for an entire class of creatives who are now wrestling with the fact that automated systems have begun to erode the value of their work. OpenAI’s decision to usurp her voice for its own purposes will now get wide and justified attention.
At the same time, it’s possible that Johansson’s experience wasn’t even the most important thing that happened at OpenAI in the past week.
Last Tuesday, after all, Ilya Sutskever announced he was leaving the company. Sutskever, a co-founder of the company who is renowned both for his technical ability and his concerns about the potential of AI to do harm, was a leader of the faction that briefly deposed Altman in November. Sutskever reversed his position once it became clear that the vast majority of OpenAI employees were prepared to quit if Altman did not return as CEO. But Sutskever hasn’t worked for the company since.
In an anodyne public statement, Sutskever said only that he now plans to work on “a project that is very personally meaningful to” him.
Why not say more about the circumstances of his departure? Vox reported over the weekend that OpenAI’s nondisclosure agreements bar former employees from criticizing the company for the rest of their lives, forbid them from even acknowledging the NDA’s existence, and force them to give up all vested equity in the company if they refuse to sign. (After the story drew wide attention, Altman apologized on Saturday and said the company would remove the provision that claws back equity from those who violate the NDA.)
As OpenAI’s chief scientist, Sutskever also led its so-called “superalignment team,” which it formed last July to research ways of ensuring that advanced AI systems act safely and in accordance with human intent. Among other things, the company promised to dedicate 20 percent of its scarce and critical computing power to the project.
But Altman’s ouster led to wide disparagement of AI safety efforts, which were wrongly identified as the reason his co-workers wanted to fire him. (The actual-if-vague reason was that he was not “consistently candid in his communications,” according to the company’s former board, a charge that seems worth revisiting in light of Johansson’s experience with him.)
And so it perhaps should not be surprising that, upon his return, safety efforts were deprioritized. The superalignment team has been disbanded after less than a year, with its remaining employees absorbed into other teams. Jan Leike, who led the superalignment team under Sutskever, quit on Friday, saying OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
In response, Altman and OpenAI president Greg Brockman issued a long statement that gestured toward building things responsibly without addressing Leike’s criticism directly.
III.
What to make of all this?
On one hand, it’s not unusual for a fast-growing startup to go through some tumult as it scales up. On the other, OpenAI was not founded to be a typical fast-growing startup: it was founded to be a nonprofit research lab focused on the safe development of superhuman intelligence.
Since the release of ChatGPT — and much more so since Altman’s return from exile — OpenAI has looked like anything but. It has consistently pushed the boundaries of what AI can do, both from a technical standpoint and in terms of what it believes society can handle.
And last week, determined to show up Google on the eve of I/O, the company went through with its demo of a Johansson voice clone, against Johansson’s explicit wishes, because … Altman is a fan?
For the moment, the only cost to OpenAI from any of this has been a torrent of aggrieved social media posts. On Monday, its APIs were at the center of a range of new PCs unveiled by its largest investor, Microsoft; next month they are expected to be announced as part of a big new partnership with Apple. One analysis last week suggested that the company’s mobile app revenue grew 22 percent the day it showcased the not-Johansson voice demo.
For years, OpenAI has told everyone that these were all secondary concerns — that its deeper ambition was something nobler and more public-spirited. But since Altman’s return, the company has been telling a different story: a story about winning at all costs.
And why bother with superalignment, when there’s winning to do?
Why bother getting actresses’ permission, when the right numbers are all still going up?
Sponsored
Simplify your startup’s finances with Mercury
As a founder of a growing startup, you’re focused on innovating to attract customers, pinpointing signs of early product-market fit, and securing the funds necessary to grow. Navigating the financial complexities of a startup on top of it all can feel mystifying and incredibly overwhelming. More than that, investing time into becoming a finance expert doesn’t always promise the best ROI.
Mercury’s VP of Finance, Dan Kang, shares the seven areas of financial operations to focus on in The startup guide to simplifying financial workflows. It details how founders and early teams can master key aspects, from day-to-day operations like payroll to simple analytics for measuring business performance. Read the full article to learn the art of simplifying your financial operations from the start.
*Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust®; Members FDIC. Platformer has been a Mercury customer since 2020.
Governing
- Chatbots can easily be manipulated to spread election disinformation, this experiment found. (Jeremy White / New York Times)
- Election officials in Arizona are role-playing scenarios in which AI is used to disrupt elections, in preparation for the real thing. (Lauren Feiner / The Verge)
- TikTok’s new policy aims to flag and remove content promoting risky weight loss and muscle-gain products, including Ozempic and anabolic steroids. (Talya Minsberg / New York Times)
- Google DeepMind released a framework to evaluate AI models on their potentially dangerous capabilities, and plans to implement auditing tools by 2025. (Reed Albergotti / Semafor)
- The FBI arrested a man who allegedly used Stable Diffusion to generate thousands of realistic images of child sexual abuse. (Samantha Cole / 404 Media)
- AI-generated children on TikTok and Instagram are drawing predators with a sexual interest in minors, this investigation found. Experts say this content is a gateway to actual CSAM. (Alexandra S. Levine / Forbes)
- Vermont lawmakers behind a new data privacy bill pushed back against tech lobbyists by asking legislators from Maine and Oklahoma for advice. (Alfred Ng / Politico)
- State Department officials say they are concerned that state-controlled Chinese repair ships could tamper with the undersea cables that carry internet traffic. (Dustin Volz, Drew Fitzgerald, Peter Champelli and Emma Brown / Wall Street Journal)
- Apple’s 27 percent fee on purchases made outside of the App Store is actually a good-faith attempt at compliance, executive Philip Schiller told a US judge. Uh huh. (Leah Nylen / Bloomberg)
- Apple has reportedly limited the development and testing of third-party browsers in the EU, after being forced by the Digital Markets Act to allow them on its mobile devices. (Thomas Claburn / The Register)
- The UK is working on plans to increase transparency over how AI models are trained, following concerns about copyright and lack of compensation for artists. (Daniel Thomas / Financial Times)
- The French government blocked TikTok in one of its overseas territories, New Caledonia, in response to protests about a new voting law. (Clothilde Goujard and Océane Herrero / Politico)
- The elections in India are seeing more audio and video deepfakes than before, often used by political candidates to reach voters. (Nilesh Christopher / WIRED)
- Meta approved a series of AI-manipulated political ads that spread disinformation about India’s election and incited religious violence, a report showed. (Hannah Ellis-Petersen / The Guardian)
- News Harvest, an AI-generated media program created by ISIS supporters, shows how AI can help terrorist groups spread their message quickly. (Pranshu Verma / Washington Post)
- Deepfake news anchors are proliferating online, spreading pro-China disinformation and propaganda. (Dan Milmo and Amy Hawkins / The Guardian)
- While Taiwan was early to label TikTok a national security threat, the country isn’t considering a ban, since the app is far from its only source of disinformation, Taiwanese lawmakers say. (Meaghan Tobin and Amy Chang Chien / New York Times)
Industry
- A conversation with Google CEO Sundar Pichai on the future of AI in Search and the complications it could bring to news publishers and others relying on Google for traffic. (Nilay Patel / The Verge)
- TikTok employees are reportedly concerned about their future, as the divest-or-ban law still looms amid the company’s lawsuit. (Juro Osawa, Qianer Liu and Kaya Yurieff / The Information)
- A look at Elon Musk’s feud with Signal and how it stemmed from a right-wing campaign against NPR. (Renee DiResta / The Guardian)
- Trump Media reported $770,500 in revenue in its first 2024 quarter, with a net loss of $327.6 million. Is that good? (Todd Spangler / Variety)
- Meta is developing “Peek,” a Snapchat- and BeReal-style feature that lets users post pictures that can be viewed only once. (Aisha Malik / TechCrunch)
- LG Electronics ended its extended reality partnership with Meta, with Amazon reportedly emerging as a new partner to provide an operating system and software. (Chae-Yeon Kim / Korea Economic Daily)
- A look at Mark Zuckerberg’s plan to win the AI race by giving Meta’s technology away for free, a bid to drive down competitors’ prices and have its tech used more widely. (Salvador Rodriguez and Sam Schechner / Wall Street Journal)
- Apple News is a lifeline for many news publishers looking to increase traffic, with multiple publishers participating in partnerships to varying degrees. Surely this will be the platform that saves the media rather than constantly adjusting the terms in its own favor, right? Right? (Max Tani / Semafor)
- A profile of Microsoft CEO Satya Nadella, how he made Microsoft ten times more valuable, and his AI plans. (Jeremy Kahn / Fortune)
- GPT-4o’s Chinese-token training data is riddled with spam, mostly consisting of phrases used in the contexts of gambling or pornography. (Zeyi Yang / MIT Technology Review)
- Snap is investing more aggressively in AI and machine learning, CEO Evan Spiegel said, after spending years overhauling its ad business. (Alex Barinka / Bloomberg)
- Slack trains its AI-powered features on user data, including messages, and users are opted in by default. (Kate Irwin / PCMag)
- Inflection AI unveiled its new leadership team and its plans for more emotional AI. (Matt Marshall / VentureBeat)
- A look at how bots and greed drove online art gallery DeviantArt’s downfall and turned much of its artist community against it. (Nitish Pahwa / Slate)
- Reddit is reintroducing its awards system, which largely mirrors the previous program with some design changes. (Ivan Mehta / TechCrunch)
- 2023 had the highest number of internet shutdowns in a single year since digital rights group Access Now began monitoring the issue in 2016. (Astha Rajvanshi / TIME)
- About 38 percent of web pages from 2013 are no longer accessible, a Pew analysis found. (Athena Chapekis, Samuel Bestvater, Emma Remy and Gonzalo Rivero / Pew Research Center)
- AI that supposedly reads human emotions can be easily misled, this author argues, because there are no universal expressions of emotion. (Lisa Feldman Barrett / Wall Street Journal)
- Companion AI robots, particularly soft and cuddly ones, could change dementia care by giving patients a friendly companion that helps them stay connected. (Cassandra Willyard / MIT Technology Review)
- More companies, including Microsoft, Google and Meta, are making smaller LLMs to court businesses worried about cost. (Cristina Criddle and Madhumita Murgia / Financial Times)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
They deserved better than this...
— Adam Sharp (@adamcsharp.bsky.social) May 20, 2024 at 7:16 AM
Talk to us
Send us tips, comments, questions, and superalignment strategies: casey@platformer.news and zoe@platformer.news.