

But really the “game” is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
No, it’s not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn’t get a new level because you’re increasing the resolution.
At this point, you either have a fundamental misunderstanding of AI models, or you’re trolling.
My god.
There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn’t any natural lower or upper bound for the size; instead you choose it based on the hardware you want to train and run the model on.
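To make that concrete, here is a rough sketch of how parameter count falls out of a few freely chosen hyperparameters for a GPT-style transformer. The 12·d² per-layer figure is the standard back-of-envelope approximation (4·d² for attention, 8·d² for the MLP, ignoring biases and LayerNorm); nothing pins these numbers down except your hardware budget.

```python
def param_count(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Approximate parameter count of a GPT-style transformer.

    Per layer: ~4*d^2 for the attention projections (Q, K, V, output)
    plus ~8*d^2 for the MLP (two matrices with a 4x hidden expansion).
    Biases, LayerNorm, and positional embeddings are ignored.
    """
    embedding = vocab_size * d_model       # token embedding matrix
    per_layer = 12 * d_model ** 2          # attention + MLP, approximately
    return embedding + n_layers * per_layer

# GPT-2 small's published hyperparameters land near its ~124M total:
print(param_count(n_layers=12, d_model=768, vocab_size=50257))    # ~124M
# GPT-3's published hyperparameters land near its ~175B total:
print(param_count(n_layers=96, d_model=12288, vocab_size=50257))  # ~175B
```

The point: `n_layers` and `d_model` are just knobs. Double them and you get a four-to-forty-times-larger model; nothing in the architecture forbids it, only the hardware you can afford.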
Now the promise from OpenAI (in their many papers, press releases, and …) was that AGI could be reached by scaling. Part of the reason Microsoft invested so much money into OpenAI was that promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.
Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they’d keep the GPUs for themselves to widen their moat.
But they’re not doing that; instead they’re scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don’t forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they’ll have to scale up massively to keep any advantage over the market. But they’re not doing that.