Anthropic in the Arena...San Francisco Tops AI Investments...Felicis in the VC Directory
Plus, funding rounds from DeepL, Farcaster, H, Scale AI & Suno
The Main Item
Anthropic in the Arena
We all saw how OpenAI lost control of the narrative after its successful GPT-4o launch last week, between the Superalignment team dissolving and Scarlett Johansson striking back against the assistant’s “Sky” voice.
Even as the company pushes ahead, more people are questioning if Sam Altman really is the man we want to take us to our AGI future.
So it only makes sense for a competitor like Anthropic to seize this moment and highlight its dedication to AI safety. After all, what better way to look sharp against a competitor that’s being criticized for playing fast and loose with the rules than to show how your AI product can be trained to become more upstanding?
This week the company shared its progress in shining a light on the “black box” of how large language models process inputs and spit out answers based on their training data.
In a blog post titled “Mapping the Mind of a Large Language Model” published Tuesday, Anthropic’s research team revealed that it had identified over 10 million “features,” internal representations that correspond to specific concepts, within its Claude 3 Sonnet model. The team used a technique called “dictionary learning” to find recurring patterns in how the mathematical units inside the model activate when prompted with certain phrases or topics. For example, they found that a feature corresponding to the Golden Gate Bridge sits close to related features, like Alcatraz Island and Governor Gavin Newsom, showing that the model organizes similar concepts near one another, much as a person would.
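To make the idea concrete, here is a minimal sketch of what dictionary learning over model activations can look like: a sparse autoencoder is trained to rewrite a model’s internal activations as combinations of a much larger set of rarely active “features.” This is an illustration only, not Anthropic’s code; the dimensions, the random stand-in data, and the training loop are all assumptions.

```python
# Minimal sparse-autoencoder sketch of "dictionary learning" over activations.
# Illustrative only: sizes, data, and hyperparameters are assumptions, not Anthropic's setup.
import torch
import torch.nn as nn

D_MODEL = 512      # width of the hypothetical activation vectors
N_FEATURES = 4096  # size of the learned dictionary (many more features than neurons)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activations -> feature strengths
        self.decoder = nn.Linear(n_features, d_model)  # features -> reconstructed activations

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))      # non-negative feature activations
        recon = self.decoder(features)
        return recon, features

# Stand-in for real activations collected from a model across many prompts.
acts = torch.randn(10_000, D_MODEL)

sae = SparseAutoencoder(D_MODEL, N_FEATURES)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity penalty: most features should stay off for any given input

for step in range(200):
    recon, features = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, each dictionary direction is one candidate "feature"; looking at which
# inputs activate it most strongly is how a feature ends up labeled "Golden Gate Bridge."
```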
The mapping they’ve completed covers just a small fraction of the billions of features the model likely contains, but it’s an important step toward making these models “safer” for human use. If certain features can be amplified or suppressed, that could prevent the model from being deceptive or biased.
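“Turning a feature on or off” can be pictured as clamping its activation before decoding and feeding the edited activations back into the model. Continuing the hypothetical sketch above (the feature index is arbitrary, chosen only for illustration):

```python
# Illustrative continuation of the sketch above: suppress one hypothetical feature.
with torch.no_grad():
    recon, features = sae(acts)
    features[:, 123] = 0.0                # clamp an arbitrary feature to zero ("turn it off")
    steered_acts = sae.decoder(features)  # edited activations to substitute back into the model
```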
It’s also a great posture to take when your biggest competitor has lost its internal team dedicated to AI safety and is under fire for exceptionally strict non-disparagement clauses. If Altman’s personal ethics and character are being questioned privately by some of his own employees, who had been barred from speaking publicly about it, then dedicating your resources to curbing AI’s more malicious tendencies is quite the value proposition. (OpenAI has said it will waive its restrictive non-disparagement provisions for many former employees.)
Yet recent hiring moves show that Anthropic does not want to be left out of the race to find the killer AI product. We mentioned last week that the company had snatched up Instagram co-founder Mike Krieger as its new Chief Product Officer, and this week it was reported that Airbnb veteran Krishna Rao is coming on board as CFO to help the company accelerate revenue. Being the more safety-focused AI research lab is great, but it doesn’t always pay the bills.
Meta is looking to up its AI game too: the tech giant announced its new product advisory council this week, made up of Silicon Valley heavyweights including Nat Friedman and Stripe’s Patrick Collison, to advise the company’s management team on, among other things, how to better utilize AI in its hardware and software products.
It’s a savvy play for Meta, a company that was seen as burning too much energy on the metaverse and now has to play in the big leagues against OpenAI and Google. But unlike Anthropic, Meta doesn’t brand itself as the premier “Safe AI” company. With each of its well-capitalized competitors breaking into the product game, Anthropic’s future will rest on its ability to generate enough revenue to offset its less-than-ideal gross margins.
In this next phase of AI development, product will be the determining factor in who comes out on top of this wave, and it’s not clear that any one chatbot has really set itself apart from the competition. It’s a good thing that Anthropic, the company that touts its values as its differentiator, is putting its foot on the gas.