Hacker News

Current-gen LLMs are not breaching the moat yet.




Yeah they are. llama.cpp has had good performance on CPU, AMD, and Apple Metal for at least a year now.
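For context on that claim, a rough sketch of what running llama.cpp on those backends looks like. The model path and prompt are placeholders, and the CMake flag names reflect my understanding of the current build options, not anything stated in the thread:

```shell
# CPU-only build and run (plain CMake, no GPU flags needed):
cmake -B build && cmake --build build --config Release
# -ngl 0 keeps all layers on the CPU
./build/bin/llama-cli -m model.gguf -p "Hello" -ngl 0

# On macOS, Metal support is built in by default; offload all layers:
./build/bin/llama-cli -m model.gguf -p "Hello" -ngl 99

# AMD GPUs go through HIP/ROCm (flag name assumed from current docs):
cmake -B build -DGGML_HIP=ON && cmake --build build --config Release
```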

The hardware is not the issue. It's the model architectures leading to cascading errors.
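One common reading of "cascading errors" is that autoregressive generation compounds small per-token mistakes over long outputs. A minimal sketch of that arithmetic, assuming (my assumption, not the commenter's) independent per-token accuracy:

```python
def sequence_success_rate(per_token_accuracy: float, num_tokens: int) -> float:
    """Probability an autoregressive model emits num_tokens tokens with
    no errors, assuming each token is independently correct with the
    given per-token accuracy."""
    return per_token_accuracy ** num_tokens

# Even 99% per-token accuracy collapses over a long generation:
for p in (0.99, 0.999):
    print(p, sequence_success_rate(p, 1000))
```

Under this simple model, 99% per-token accuracy gives well under a 1% chance of a flawless 1000-token output, which is one way architecture (rather than hardware) becomes the bottleneck.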


