> reportedly in the low single-digit billions at best
They are expected to hit $9 billion in revenue by the end of the year, which puts the valuation multiple at only about 30x. That is still steep, but at that growth rate not totally unreasonable.
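Quick back-of-envelope on that multiple, using the roughly $300bn valuation floated elsewhere in this thread and the $9bn revenue figure (both are the thread's numbers, not official filings):

```python
# Back-of-envelope: forward revenue multiple implied by the figures in this thread.
# Both inputs are assumptions from the discussion, not official numbers.
valuation = 300e9         # ~$300bn valuation being discussed
revenue_run_rate = 9e9    # ~$9bn revenue expected by end of year

multiple = valuation / revenue_run_rate
print(f"Forward revenue multiple: ~{multiple:.0f}x")  # ~33x, i.e. roughly the 30x cited
```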
The optimistic view is that Anthropic is one of about four labs in the world capable of generating truly state-of-the-art models. Also, Claude Code is arguably the best tool in its category at the moment. They have the developer market locked in.
The problem as I see it is that neither of those things is a significant moat. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs thanks to TPUs. Claude Code is neat, but in the long run it will definitely be replicated.
The missing piece here is that Anthropic is not playing the same game. Consumer branding and a larger user base are concerns for OpenAI vs Google; personal chatbot/companion/search isn't Anthropic's focus.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
> Cursor had won the developer market from the previous winner copilot
It's a fair point, but the counterpoint is that back then these tools were IDE plugins you could code up in a weekend, i.e. closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations you'd need to chase too, and long-term enterprise sales contracts you'd need to sell into, i.e. much more like an enterprise SaaS play.
I don't want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the work required in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can't structurally compete in multi-year enterprise SaaS.
Fun fact! You can use the word "just" in front of anything to make it sound trivial. Isn't planet Earth just one of eight planets in the Solar System? What's the big deal? Isn't Google just a website? Take out the word "just" and think on it a little. In this case, maybe there's something to that?
Most of the secret sauce of Claude Code is visible to the world anyway, in the form of the minified JavaScript bundle they send. If you're ever wondering about its inner workings, you can simply ask it to deminify itself.
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people do use Claude Code and Codex simultaneously in some cases.
Individual and startup devs yes. Enterprise devs, less so.
The latter are locked in to whatever vendor(s) their corporate entity has subscribed to. In a perverse twist, this gives the approved[tm] vendors an incentive to add backend integrations to multiple different providers so that their actual end-users can - at least in theory - choose which models to use for their work.
Almost every single AI doomer I listen to hasn't updated any of their priors in the last 2 years. These people are completely unaware of what is actually happening at the frontier or of how much progress has been made.
You haven't actually looked at their fundamentals. They're profitable serving current models, including training costs, and are only losing money on R&D for future models; if you project revenue growth onto future generations of models, you get a clear path to profitability.
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure, the revenue growth could stop, but it hasn't, and there is no reason to think it will.
> They’re profitable serving current models including training costs
I hear this a lot; do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him, but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
The best I can find is this TechCrunch article, which appears to be referencing an article from The Information that is paywalled.
> The Information reports that Anthropic expects to generate as much as $70 billion in revenue and $17 billion in cash flow in 2028. The growth projections are fueled by rapid adoption of Anthropic’s business products, a person with knowledge of the company’s financials said.
> That said, the company expects its gross profit margin — which measures a company’s profitability after accounting for direct costs associated with producing goods and services — to reach 50% this year and 77% in 2028, up from negative 94% last year, per The Information.
So assuming the gross margin is GAAP (which it probably isn't), this would suggest that the costs of training are covered by inference sales this year (which is definitely good).
However, I'm still a little sceptical, as the cost to train new models is (apparently) going up super-linearly, which means the revenue from inference needs to go up alongside it.
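To make that concern concrete, here's a toy sketch with made-up numbers (not Anthropic's actuals): if training costs sit in cost of revenue and compound faster than revenue, the gross margin compresses rather than climbing toward the projected 77%.

```python
# Illustrative only: hypothetical growth rates, not Anthropic's actual financials.
def gross_margin(revenue, serving_cost, training_cost):
    """Gross margin with training treated as a direct cost of revenue."""
    return (revenue - serving_cost - training_cost) / revenue

revenue, serving, training = 9.0, 3.0, 1.5   # $bn, invented starting point (~50% margin)
for year in range(4):
    print(f"year {year}: gross margin {gross_margin(revenue, serving, training):.0%}")
    revenue *= 2.0    # assume revenue doubles each year
    serving *= 2.0    # serving cost scales with usage
    training *= 3.0   # training cost grows super-linearly per generation
# Margin falls from 50% to ~10% -- inference revenue has to outgrow training spend
# for margins to expand the way the projections assume.
```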
Interesting to think about though, thanks for the source!
1. Sounds like exactly when early investors and insiders would want to cash in and when retail investors who “have heard of the company and like the product” will buy without a lot of financial analysis.
2. A 300bn IPO can mean actually raising 300bn by selling 100% of the company. But it could also mean selling 1% for 3bn, right? Which seems like a trivial amount for the market to absorb, no?
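For what it's worth, the arithmetic behind point 2 (numbers are hypothetical, matching the figures above):

```python
# Valuation vs. capital actually raised: only the floated fraction hits the market.
post_money_valuation = 300e9                 # "a 300bn IPO"
for float_fraction in (0.01, 0.10, 1.00):    # sell 1%, 10%, or the whole company
    raised = post_money_valuation * float_fraction
    print(f"float {float_fraction:.0%}: raises ${raised / 1e9:.0f}bn")
# A 1% float raises only $3bn -- a trivial amount for the market to absorb, as noted.
```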