Comment by nutrientharvest

12 days ago

That might be part of it, but I think the bigger factor is cost optimization. OpenAI in particular keeps replacing its models with versions that are much faster (and therefore cheaper to run) and supposedly of equivalent quality, but they aren't really. GPT-4 -> GPT-4-Turbo -> GPT-4o have all been big upgrades in cost and latency but arguably downgrades in "intelligence" (or whatever you want to call it).