A Superintelligent Startup...Perplexity's Tricky Web Crawlers...Battery Ventures in the VC Directory
Plus, Trump makes his All-In debut
The Main Item
Ilya Sutskever Stands Strong on Safe AI
Ilya Sutskever, the OpenAI co-founder and former chief scientist, is taking another swing at steering AI research in a way that prioritizes safety—and, as he told Bloomberg rather pointedly, “we mean safe like nuclear safety as opposed to safe as in ‘trust and safety.’” Last week he unveiled his new startup-slash-research lab, Safe Superintelligence, which has only one goal: to create artificial general intelligence that will help us and not hurt us.
Last fall, Sutskever famously joined a majority of OpenAI's non-profit board in voting to oust Sam Altman, who, as head of OpenAI's for-profit subsidiary, was pursuing an aggressive commercial strategy and had lost the board's trust. After a staff revolt, Sutskever switched sides, but he clearly felt OpenAI wasn't taking safety seriously enough.
The new venture will be “fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” Sutskever said.
The vast sums of money that he will presumably need for building and training AI models are likely to test that pledge. His co-founders include Daniel Gross, a highly respected AI expert and investor, and Daniel Levy, a former colleague from OpenAI, but no investors have been named so far.
It was the need for cash—both to pay for compute, and to pay expensive researchers—that prompted OpenAI to all but abandon its non-profit mission. Sutskever will be asking backers to pony up for a promise that might never be realized, while vowing never to pivot.
It’s something he, uniquely, might be able to get away with. “Ilya is one of the brilliant scientists of our generation, plus the Daniels…it’s as good as a team start as you can have,” one investor texted me.
As another VC noted, it sounds a lot like a return to OpenAI’s original goals, which from an investor standpoint is a mixed bag. “Is it cool? Yah. There are definitely people who want to create AGI. Will it make money for investors? Not sure with him helming it and their current values/focus,” this person said.
It’s also a very different moment from 2015, when OpenAI was founded: the most well-capitalized companies on the planet are now throwing many billions of dollars at every conceivable AI opportunity.
“I think the vision is in theory noble but in practice I just don’t buy that one group will be able to be so far ahead of others AND be able to protect that tech,” the first investor said. “It’s like building one recycling plant (noble) and expecting global warming to go away.”
All of that said, Sutskever deserves credit for sticking to principle, especially given that he could walk through a different door at any moment and be an instant billionaire. The effort to create safe AGI deserves at least a chance.