A week ago, analyst TD Cowen revealed that Microsoft had canceled leases "totalling a couple hundred MWs," with "at least two private data center operators across multiple US markets." The report also details how Microsoft "pulled back on converting negotiated and signed Statement[s] of Qualifications (SQQs)."
I’m supposed to be able to take a model architecture from today, scale it up 100x and get an improvement. I can’t make the settings in Crysis 100x higher than they can go.
Games always have a limit; AI is supposed to get better with scale. Which part do you not understand?
You can make Crysis run at higher FPS. You can add polygons (remember ATI's clown feet?). You can add detail to textures. https://research.nvidia.com/publication/2016-06_infinite-resolution-textures
But really the “game” is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
Which part of diminishing returns not offering as much profit did you not understand?
Current models give MS an extra 30% revenue. If they spend billions on a new model, will customers pay even more? How much more would you pay for a marginally better AI?
No, it’s not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn’t get a new level because you’re increasing the resolution.
At this point, you either have a fundamental misunderstanding of AI models, or you’re trolling.
I don’t think you understand how it works at all.
Data is collected. Training is done on the data. Training can then be done on a trained model's outputs (distillation, as with DeepSeek). You now have a model. That model is a static piece of software. It requires 700 GB of RAM to run (DeepSeek). Throwing more hardware at the model does nothing but give you a quicker response.
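(Aside: that "training on the trained data" step is essentially distillation: a frozen teacher model's outputs become the training targets for a student. Here's a minimal PyTorch-style sketch of the idea; it's a generic illustration, not DeepSeek's actual pipeline, and every name in it is made up:)

```python
import torch
import torch.nn.functional as F

# Stand-ins for real networks: the teacher plays the "already trained
# model", the student is the model being trained on its outputs.
teacher = torch.nn.Linear(128, 50000)
student = torch.nn.Linear(128, 50000)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 2.0  # softmax temperature; softens the teacher's distribution

def distill_step(batch: torch.Tensor) -> float:
    with torch.no_grad():               # teacher stays frozen
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    # KL divergence between softened distributions: the student is
    # literally trained on the trained model's output.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(distill_step(torch.randn(8, 128)))
```

And note that once training is done, the weights are fixed: serving that model on more GPUs changes throughput and latency, not what it can do.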
If everyone pays you to use your model, you have no reason to develop a new one. Like Skyrim.
My god.
There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn't any natural lower or upper bound for the size; instead, you choose it based on the hardware you want to run the model on.
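To make that concrete, here's a back-of-the-envelope sizing sketch using the standard rough approximation for transformer parameter count (params ≈ 12 · layers · width²; the configurations below are illustrative, not any particular model):

```python
# Rough transformer sizing, ignoring embeddings and other small terms:
# params ≈ 12 * n_layers * d_model**2.

def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

def weight_gb(params: int, bytes_per_param: int = 2) -> float:
    # fp16/bf16: two bytes per weight, counting the weights only.
    return params * bytes_per_param / 1e9

for n_layers, d_model in [(12, 768), (48, 1600), (96, 12288)]:
    p = approx_params(n_layers, d_model)
    print(f"{n_layers:>2} layers, width {d_model:>5}: "
          f"~{p / 1e9:5.1f}B params, ~{weight_gb(p):6.1f} GB of weights")
```

You dial those two knobs up until you hit whatever memory and compute you're willing to pay for; nothing in the architecture stops you sooner.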
Now the promise from OpenAI (from their many papers, press releases, and so on) was that we'd be able to reach AGI by scaling. Part of the reason why Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.
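(That promise rests on empirical scaling laws, e.g. Kaplan et al. 2020, where loss falls roughly as a power of compute, L ∝ C^(−α) with α around 0.05. That tiny exponent is also why returns diminish so fast. Quick arithmetic, with the exponent from the paper and everything else illustrative:)

```python
# Loss under a compute scaling law, roughly L(C) ∝ C**(-alpha),
# with alpha ≈ 0.05 (Kaplan et al., 2020). What does more compute buy?

alpha = 0.05
for factor in (10, 100, 1000):
    improvement = 1 - factor ** -alpha
    print(f"{factor:>5}x compute -> ~{improvement:.0%} lower loss")
# ~11%, ~21%, ~29%: each extra 10x buys less, while costing 10x more.
```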
Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they’d keep the GPUs for themselves to widen their moat.
But they're not doing that; instead, they're scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don't forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they'll have to scale up massively to keep any advantage compared to the market. But they're not doing that.