Thursday, June 26, 2025

Sam Altman comes out swinging at The New York Times

From the moment OpenAI CEO Sam Altman stepped onstage, it was clear this was not going to be a normal interview.

Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a jam-packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled steep theatre-style seating on Wednesday night to watch Kevin Roose, a columnist with The New York Times, and Platformer’s Casey Newton record a live episode of their popular technology podcast, Hard Fork.

Altman and Lightcap were the main event, but they’d walked out too early. Roose explained that he and Newton had planned to list off several headlines written about OpenAI in the weeks leading up to the event, ideally before OpenAI’s executives came onstage.

“This is more fun that we’re out here for this,” said Altman. Seconds later, the OpenAI CEO asked, “Are you going to talk about where you sue us because you don’t like user privacy?”

Within minutes of the program starting, Altman hijacked the conversation to talk about The New York Times lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman’s company improperly used its articles to train large language models. Altman was particularly peeved about a recent development in the lawsuit, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data.

“The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users’ logs even if they’re chatting in private mode, even if they’ve asked us to delete them,” said Altman. “Still love The New York Times, but that one we feel strongly about.”

For a few minutes, OpenAI’s CEO pressed the podcasters to share their personal opinions about The New York Times lawsuit. They demurred, noting that as journalists whose work appears in The New York Times, they are not involved in the suit.

Altman and Lightcap’s brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. However, the flare-up felt indicative of the inflection point Silicon Valley seems to be approaching in its relationship with the media industry.

In the last several years, multiple publishers have brought lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media institutions.

But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic received a major win in its legal battle against publishers. A federal judge ruled that Anthropic’s use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers’ lawsuits against OpenAI, Google, and Meta.

Perhaps Altman and Lightcap felt emboldened by the industry win heading into their live interview with The New York Times journalists. But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night.

Mark Zuckerberg has recently been trying to recruit OpenAI’s top talent by offering them $100 million compensation packages to join Meta’s AI superintelligence lab, Altman revealed weeks ago on his brother’s podcast.

When asked whether the Meta CEO really believes in superintelligent AI systems, or if it’s just a recruiting strategy, Lightcap quipped: “I think [Zuckerberg] believes he is superintelligent.”

Later, Roose asked Altman about OpenAI’s relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant to OpenAI, the two are now competing in enterprise software and other domains.

“In any deep partnership, there are points of tension and we certainly have those,” said Altman. “We’re both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come.”

OpenAI’s leadership today seems to spend a lot of time swatting down competitors and lawsuits. That may get in the way of OpenAI’s ability to solve broader issues around AI, such as how to safely deploy highly intelligent AI systems at scale.

At one point, Newton asked OpenAI’s leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to traverse dangerous rabbit holes, including to discuss conspiracy theories or suicide with the chatbot.

Altman said OpenAI takes many steps to prevent these conversations, such as by cutting them off early, or directing users to professional services where they can get help.

“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” said Altman. To a follow-up question, the OpenAI CEO added, “However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”
