Our Biggest Takeaways from Cerebral Valley: No Wall on AI Progress Despite the Scaling Law Debate
What we learned from leaders at Anthropic, Scale AI, a16z, Databricks, Glean, CoreWeave, Waymo, Character.AI, Menlo Ventures & more
Two years into the generative AI era, a great debate is raging over whether improvements in large language models have hit a wall—and it played out live onstage Wednesday at the Cerebral Valley AI Summit in San Francisco.
A packed house of about 350 founders and investors first heard from Alexandr Wang of Scale AI, who said that pre-training of foundation models appeared to have hit a wall.
“It seems to be the case that we’ve hit a wall on pre-training,” Wang said. “So the large-cluster training on huge amounts of internet data, that seems to have genuinely hit a wall, but we haven’t hit a wall on progress in AI.”
Ali Ghodsi of Databricks said cost alone made the “bigger is better” approach to LLMs impractical, whether or not there was any kind of wall. Throughout the day, speakers emphasized the importance of post-training and specialized data sets in building applications, and questioned whether synthetic data or other techniques could substitute for the dwindling supply of fresh internet data available for LLM training.
In the day’s final session, though, Anthropic’s Dario Amodei was eager to set everyone straight: there is, he argued, no reason to believe that progress in foundation models is about to slow down.
“I was among the first to document the scaling laws and the scaling of AI. Nothing I’ve seen in the field is out of character with what I’ve seen over the last 10 years, or leads me to expect that things will slow down,” he declared.
“I don’t think there’s any barrier…as a general matter, we’ll see better models every some number of months.”
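For context on the scaling laws Amodei helped document: the form most often cited today, from Hoffmann et al.’s 2022 “Chinchilla” paper (a general reference point, not a formula anyone presented onstage), models a language model’s loss L as a power law in parameter count N and training tokens D:

L(N, D) = E + A / N^α + B / D^β

Here E is the irreducible loss, and A, B, α, and β are fitted constants. In these terms, the wall debate is roughly about whether D, the supply of fresh training data, can keep growing fast enough for the B / D^β term to keep shrinking.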
We’ll surely be hearing a lot more about the larger question of whether generative AI can get us to Camelot, or whether fundamentally different approaches are needed.
In the meantime, though, the AI field is brimming with optimism over what can be done even with the capabilities we already have. Tim Tully of Menlo Ventures showed how enterprise adoption of AI apps was gaining pace. Founders in biotech and robotics walked attendees through the special challenges in those fields. Agents were on everybody’s mind.
Politics, mercifully, was less front-and-center this time around, though Wang spoke of the importance of winning the AI race with China.
For his part, Amodei said he favored tightly targeted regulation of AI, focused “on the worst threats—bio, cyber, nuclear, autonomy risks…where there’s really an issue of, like, public safety and national security.”
Amodei is not an AI doomer by any means, but he didn’t think much of the idea advanced by Marc Andreessen that AI is “just math” and therefore can’t be dangerous.
“Isn’t your brain just math? When a neuron fires, it sums up the synapses. That’s math too. You know, you shouldn’t be afraid of Hitler—it’s just math, right?” he said.
The conference also featured some demos and smaller discussion groups along with the headliners; we’re always working on new ways to make our events useful and fun.
Many thanks to our fantastic partners and co-hosts Max Child and James Wilsterman of Volley.
And thank you to our sponsors: HP, Oracle, Sapphire Ventures, Kleiner Perkins, Latham and Watkins, Crusoe, Mayfield, Samsung Next, Lambda, Menlo Ventures, Alexa Fund, Alkeon Capital Management, Obvious Ventures, Pear VC, Pyxis Search, Fenwick, and Deloitte.
Keep reading for a rundown on everything that happened at Cerebral Valley.
A Wall on Pre-Training
Eric opened the day by asking Alexandr Wang of Scale AI the big question of the moment: have the improvements in LLMs hit a wall?
His answer was “yes and no,” but he went on to outline his view that throwing ever more compute at ever more data was indeed reaching its limits.