Want to also add that the model doesn’t know how to respond in a user -> assistant style conversation after its pretraining; it’s a pure text predictor (look at the open-source base models).
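A quick way to see this for yourself, as a minimal sketch using Hugging Face transformers (gpt2 is just an illustrative choice of an ungated pretrained-only checkpoint; any open-source base model shows the same behaviour):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a pretrained-only (base) checkpoint, chosen here
# only because it is small and freely downloadable.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A base model just continues the text. Given a chat-shaped prompt,
# it will typically ramble or invent further "User:" turns rather
# than answer, because nothing taught it to stop and reply.
prompt = "User: What is the capital of France?\nAssistant:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```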

There’s also what is being called mid-training, where the model is trained on higher-quality traces; it acts as a bridge between pre-training and post-training.





Just to go off of this, there is also the stochastic random overfit retraining process (SRORP). The idea behind SRORP is to avoid overfitting: it takes data points from -any- stage of the prior process, samples with replacement, and builds usually 3-9 bootstrap models at random. The median is then taken across all model weights to wipe out outliers. This SRORP polishing, if done carefully, is usually good for a 3-4% gain across benchmarks.
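For concreteness, a minimal PyTorch sketch of the procedure as described above (srorp_polish and train_fn are hypothetical names; the bootstrap count and the weight-median step follow the comment, not any established library API):

```python
import copy
import random
import torch

def srorp_polish(base_model, dataset, train_fn, n_boot=5):
    """Sketch of the SRORP idea from the comment above: train several
    bootstrap-resampled copies of the model, then take the element-wise
    median of their weights to wipe out outliers."""
    boot_states = []
    for _ in range(n_boot):  # the comment suggests 3-9 bootstrap models
        # Resample the training data with replacement (a bootstrap sample).
        sample = random.choices(dataset, k=len(dataset))
        model = copy.deepcopy(base_model)
        train_fn(model, sample)  # caller-supplied fine-tuning routine
        boot_states.append(model.state_dict())

    # Element-wise median across the bootstrap models' weights.
    merged = {}
    for name in boot_states[0]:
        stacked = torch.stack([s[name].float() for s in boot_states])
        merged[name] = stacked.median(dim=0).values
    polished = copy.deepcopy(base_model)
    polished.load_state_dict(merged)
    return polished
```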


