When Emmett Shear, the former chief executive of livestreaming site Twitch, was named interim chief executive of OpenAI on Sunday night, it might have seemed a curious choice.
After graduating from college in 2005, he spent almost his entire career at Twitch, the Amazon-owned platform popular among video gamers, as it grew from a fledgling site called Justin.tv into a massive platform with more than 30 million daily viewers, before leaving earlier this year.
Mr. Shear, 40, an avid video gamer, was seen as a capable leader who steered Twitch through several changes. But he faced criticism, including over his handling of claims in 2020 that Twitch’s workplace culture was hostile toward women, and over the site’s slowness in responding to harmful content. Some employees and livestreamers also complained that his focus on pushing Twitch toward profitability through cost cutting was degrading the quality of the platform.
He also knows Sam Altman, who was ousted from OpenAI by its board of directors on Friday. Both were in the same group at Y Combinator, the start-up fund that invested in both of their early companies.
But in interviews and on social media, Mr. Shear has expressed a view about the risks of artificial intelligence that might appeal to OpenAI’s board members, who ousted Mr. Altman at least partly out of concern that he was not paying enough attention to the potential dangers posed by the company’s technology.
Appearing on a technology podcast in June, Mr. Shear expressed concern about what might happen if AI reached artificial general intelligence, or AGI, a term for human-level intelligence. He worried that at such a point, an AI system could become so powerful that it could continue to improve itself without the need for human input, and could have the potential to destroy humanity.
Mr. Shear could not immediately be reached for comment on Monday. In a post on X, he said, “Based on the results of what we learn from these, I will drive change across the organization – even including strongly pushing for significant governance changes if necessary.”
On the podcast, Mr. Shear discussed a thought experiment often cited in AI circles, which focuses on paper clips: in short, the idea is that an all-powerful AI given the goal of creating as many paper clips as possible might determine that eradicating humans would be the most effective way to accomplish that goal.
“The first step is, ‘Take over the planet,’ right? Then I have control over everything. Step two is ‘I solve my goal,'” he said.
If AI reaches that point, Mr. Shear said, the potential devastation would be like “a bomb that destroys the universe.”
“This is not just human-scale extinction; human extinction is bad enough,” he said. “It’s like the potential destruction of all value in the light cone. Not just for us, but for any alien species caught in the blast.”
Mr. Shear said he was not as worried as some AI theorists about this type of world-ending event: partly because he did not think current AI technology was close to such a breakthrough, and partly because he thought it might be possible to ensure that the goals of AI systems were aligned with the goals of humans. But he still favored industry safety measures.
“I’m in favor of creating some kind of fire alarm, like maybe, ‘No AI bigger than X,’” he said. “I think there are good options for international cooperation and treaties around some kind of AI test ban.”
In posts on X, Mr. Shear reinforced those points, referring to himself as a “doomer” and suggesting that companies should put the brakes on their technological progress.
“I am in favor of slowing down,” he replied to another user in September. “We can’t learn to create safe AI without experimenting, and we can’t experiment without progress, but we shouldn’t move forward at the maximum speed possible either.”