
And really the reason that it would be like that is that the models don't learn, per se, within their lifetime.

I'm told that each model is cash-flow positive over its lifetime, which suggests that if the companies could just stop training new models, the money would come raining down.

If they have to keep training new models to keep pace with changes in the world, though, then token costs would be only maybe 30% electricity and 70% model depreciation -- i.e. the cost of training the next generation of model so that model users don't become stranded 10 years in the past.
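To make that 30/70 split concrete, here's a back-of-envelope sketch. Every number below (electricity cost per token, training cost, lifetime token volume) is invented for illustration, not sourced; the point is only how training amortization can dominate per-token cost.

```python
# Hypothetical per-token cost split between inference electricity
# and amortized training cost. All figures are made-up assumptions.

electricity_per_token = 3e-6   # $ of electricity per token served (assumed)
training_cost = 7e8            # $ to train the next-generation model (assumed)
lifetime_tokens = 1e14         # tokens served before the model is retired (assumed)

# Spread the training bill across every token the model will ever serve.
depreciation_per_token = training_cost / lifetime_tokens  # 7e-6 $/token

total_per_token = electricity_per_token + depreciation_per_token

print(f"electricity share:  {electricity_per_token / total_per_token:.0%}")
print(f"depreciation share: {depreciation_per_token / total_per_token:.0%}")
# With these assumed numbers: electricity 30%, depreciation 70%
```

The shares depend entirely on the assumed lifetime token volume: stretch the same training cost over more tokens (a longer useful life, or more users) and the depreciation share shrinks, which is exactly why "stop training new models" would make the economics look so much better.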


