Sam Altman Felt ‘Terrible’ Telling Disney CEO Josh D’Amaro About OpenAI’s Decision to Kill Off Sora, Says Companies Still Looking to Collaborate

By Todd Spangler, NY Digital Editor
Last week, OpenAI CEO Sam Altman broke the news to newly installed Disney CEO Josh D’Amaro that the AI giant was killing off the Sora video-gen platform — before Disney was able to launch its characters in the system. Altman felt “terrible” about it, but added that Disney and OpenAI are still looking to work together, the tech CEO said in his first interview since the Sora announcement.
About the decision to shutter Sora, Altman said it boiled down to resource allocation: OpenAI needed to “concentrate our compute and our product capacity into these next generation of automated researchers and companies,” he said on iHeartPodcasts’ “Mostly Human,” hosted by tech journalist Laurie Segall.
Disney, which was reportedly caught off-guard by OpenAI’s sudden Sora decision, has shelved its plans to invest $1 billion in the AI company. Under their original agreement, fans would have been able to use the Sora app to create their own AI versions of 200-plus Disney characters licensed to OpenAI, including Iron Man, Cinderella and Mickey Mouse, with those creations also populated on Disney+.
Altman told Segall: “I love Sora. I love generated videos and I love our partnership with Disney, and we’re working hard with them to find a world where they can still do something amazing and we can help with that.”
Altman said he personally called D’Amaro about the Sora shutdown decision. “The very first thing that the new Disney CEO Josh said to me, and I felt, like, terrible… He’s like, ‘I get it.’ But it’s super sad always to disappoint a partner or users or a team, all of which are doing incredible work.”
In addition to the character-licensing deal for Sora, the original Disney-OpenAI agreement had multiple elements: Disney was to become a major customer of OpenAI, using its APIs to build new products, tools and experiences (including for Disney+) and deploying ChatGPT for its employees. At this point, Disney is reevaluating all aspects of that agreement.
He continued, “There are like many hard parts about being a CEO that you don’t get sympathy for… but one of them is, like, you have to like make a lot of like very tough resourcing calls and a lot of good things get caught up in that because they’re not the most important thing.”
According to Altman, OpenAI has “a few times” in its history “realized something really important is working, or about to work so well, that we have to stop a bunch of other projects.” In fact, he said, that’s what happened with the GPT-3 large language model, which it has phased out. “We had a whole portfolio of bets at the time. A lot of them were working well. We shut down many projects that were working well like robotics, which we mentioned, so that we could concentrate our compute, our researchers, our effort into this thing that we said, ‘OK, there’s a very important thing happening.'”
Said Altman, “I did not expect three or six months ago to be at this point we’re at now, where something very big and important is about to happen again with this next generation of models and the agents they can power.”
In the interview with Segall, Altman also weighed in on why he believes governments — not tech companies — should ultimately set the rules for AI.
“One of the most important questions the world will have to answer in the next year is, Are AI companies or are governments more powerful?” he said. “And I think it’s very important that the governments are more powerful. The future of the world, and the decisions about the most important elements of national security should be made through a democratically elected process. And the people that have been appointed as part of that process, not me, and not the CEO of some other lab.”
Altman continued, “I don’t think it works for our industry to say, ‘Hey, this is the most powerful technology humanity has ever built. It is going to be the high-order bit in geopolitics. It is going to be the greatest cyberweapon the world has ever built. It is going to, you know, be the determinant of future wars and protection. And we are not giving it to you.'”
The OpenAI CEO also said there is a group of “loud people online who really don’t trust the government to follow the law. And that feels like a very bad sign for our democracy… I realize [governments are] not perfect and some things are gonna get screwed up, and I think we have a system of checks and balances, but I mostly trust it.”
Segall asked for Altman’s thoughts on news that a federal judge last week issued a preliminary injunction against the Pentagon’s move to label Anthropic’s AI a threat to the supply chain. In the ruling, the judge said the action appeared to be “classic First Amendment retaliation.”
Altman responded, “I’ve said all the way through, like publicly, privately, loudly, that we thought the government doing anything… against Anthropic was really bad. We were trying to help provide an off-ramp there. I think that’s, like, a very bad thing.” Altman said his message to both sides was: “Find a way to work together. Like, stop the stuff on both [sides], stop the escalation on both sides and find a way to work together.”