
The AI Industry's Problem Is No Longer a 'Lack of Models,' but a 'Lack of Compute'


@源深路炒家: The FT reports that Anthropic has partnered with SpaceX to lease over 300MW of compute from SpaceX's Colossus 1 data center to run Claude models. The facility houses hundreds of thousands of Nvidia GPUs and was originally intended primarily for xAI and Grok.

There are actually several key takeaways here.

First, the AI industry's problem is no longer a "lack of models," but a "lack of compute."

This year, Anthropic has successively signed massive compute agreements with:

Amazon
Google
Microsoft
Nvidia
And now SpaceX

These include deals such as:

A 5GW-level partnership with Amazon
A similar-scale partnership with Google/Broadcom
$30 billion-level compute support from Microsoft Azure

Essentially, the core competitive advantage has now become:
"Who can continuously secure enough GPUs and electricity."

Second, Musk appears to be gradually accepting that
"selling compute is more profitable than training your own models."

A crucial sentence in the article is:

Grok's usage significantly lags behind Claude and ChatGPT, so SpaceX has started leasing "surplus compute" to competitors.

This is actually very similar to the early logic of cloud computing.

Amazon also started out with surplus internal IT capacity, and "renting out infrastructure" ultimately grew into AWS.

A similar trend is now emerging in the AI industry:

In the future, the truly profitable entities may not only be model companies, but could also include:

GPU resource providers
Data centers
Electricity
Optical communication
Network infrastructure

Third, the AI industry is increasingly resembling a "super capital-intensive industry."

In the past, internet startups could build products with a few dozen people and a few servers.

Now, top-tier AI companies require:

Hundreds of MW
Hundreds of thousands of GPUs
Tens of billions of dollars in CapEx

This is starting to approach traditional capital-intensive sectors like energy, railways, and telecom operators.
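As a rough sanity check, those two figures are consistent with each other. The sketch below estimates how many accelerators a 300MW facility could power; the per-GPU draw and the overhead factor are assumptions for illustration, not numbers from the article:

```python
# Rough estimate: how many GPUs can a 300 MW data center power?
# Assumptions (not from the article): ~700 W per accelerator,
# with cooling/networking overhead modeled as PUE = 1.4.
site_power_w = 300e6   # 300 MW total facility power
gpu_power_w = 700      # assumed draw per accelerator
pue = 1.4              # assumed power usage effectiveness

gpus = site_power_w / (gpu_power_w * pue)
print(f"~{gpus:,.0f} GPUs")  # ~306,122 GPUs
```

Under these assumptions, 300MW works out to roughly 300,000 GPUs, which matches the "hundreds of thousands of Nvidia GPUs" figure in the report.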

The article even mentions:

SpaceX plans to build "orbital AI compute" in the future, meaning putting data centers in space.

This sounds like science fiction, but the core issue behind it is very realistic:

Earth's electricity, land, and cooling capabilities are increasingly struggling to keep pace with the speed of AI expansion.

Fourth, this also shows that the market previously underestimated the true demand for Claude.

Recently, many have been complaining about Claude's severe rate limiting and declining user experience, but fundamentally, it's not that no one is using the product; rather, demand is growing too fast, and compute is insufficient.

Someone in the FT comments section even noted:
Previously, you could upload 5 files at a time, but now you can only upload 1.

This indirectly confirms that demand for high-quality models is growing explosively.

So now, the entire AI industry is increasingly becoming:
"Whoever can secure more GPUs, electricity, and data centers will be the one who can continue to expand."
