👐 OpenAI Introduced GPT-4.5—Their Latest Model Without Reasoning
OpenAI CEO Sam Altman notes that GPT-4.5 isn't a reasoning model, so it won't crush benchmarks. (Notably, Altman himself didn't attend the presentation.)
With GPT-4.5, the developers focused on making the model more "human-like" rather than aiming for a "superhuman" AI. The model handles questions better, gives more natural answers, hallucinates less, and better understands what users want.
Other Features:
➡️ "It's a giant, expensive model," says Altman. OpenAI trained GPT-4.5 across several data centers at once and, in the end, ran out of free GPUs.
➡️ The model knows far more, thanks to a larger pretraining dataset. In the hard sciences, GPT-4.5 outperforms GPT-4o by nearly 1.5× and comes close to the "reasoning" model o3-mini. In coding and math, however, despite clear progress, it still falls well short of o3-mini.
➡️ The model hallucinates much less than both GPT-4o and o3-mini.
➡️ Pro subscribers ($200/month) already have access; Plus subscribers ($20/month) will have to wait until next week, all because of the model's size.
📈 What Does This Release Mean?
Scaling data helps the chatbot understand reality better, but reasoning models still lead by a wide margin on tough tasks, as the latest versions of Claude, Grok, Gemini, and DeepSeek show.
GPT-4.5 is OpenAI's last "ordinary" model. GPT-5 will be a hybrid, able to reason when needed. But reasoning isn't a magic fix for everything.
"Scaling pretraining and scaling thinking are two different ways to improve. They work together, not against each other," says OpenAI researcher Noam Brown. We recently covered how his team started building reasoning models.