> we won’t have that until we come up with a better way to fund these things.
Isn't this already happening with LLaMA, Dalai, etc.? You can already run Whisper yourself, and you can run a model almost as powerful as gpt-3.5-turbo locally. So I can't see why it's out of the question that we'll be able to host a model as powerful as GPT-4 on our own (highly specced) Mac Studio M3s, or whatever it may be.
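For example, running Whisper locally is already only a few lines of Python (a minimal sketch using the openai-whisper package; the model size and audio file name here are just placeholders):

    import whisper  # pip install openai-whisper

    # "base" is one of the smaller checkpoints; bigger ones ("medium",
    # "large") trade speed for accuracy. All of them run on local hardware.
    model = whisper.load_model("base")

    # "speech.mp3" stands in for any local audio file you want transcribed.
    result = model.transcribe("speech.mp3")
    print(result["text"])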