Either way, it’ll reach a point where models a few generations behind the cutting edge are “good enough” for existing problem spaces, to the degree that models become further commoditized. Like any other tech (e.g. iPhones), the next generation will enable new emergent use cases or continue to eke out diminishing advantages over the competition.
The gap between the value added by GPT-3.5 and GPT-4 to my everyday tasks is immense. Most people outside of these circles don't realize that such a large gap exists.
Personally, it’s not one specific function, but the answer I’m looking for [e.g., code that just works] is significantly better from GPT-4 imo. The better the output is, the less time I spend and the more likely I am to default to using the model in the first place.
It may be very hard to train cutting-edge models if you only have consumer hardware.