
I think "open weights" lends far too much credence to the idea that how these models work, or how they were trained, is easily inspectable.

We can barely comprehend binary firmware blobs; it's an area of active research to even figure out how LLMs work.



Agreed. I am more excited about completely open source models like OLMoE.

At least then things could be audited. If I, as a nation, were worried that a model might make my software more vulnerable, then that nation (or any corporation, for that matter) could pay for an audit or commission an independent one.

I hope that models like GLM 4.6, or really any AI model, get released fully open source. There was a model recently that went completely open source, with something like 70 trillion in its whole dataset, and it became the largest open source model, IIRC.


A backdoor would still be highly auditable in a number of ways even if inspecting the weights isn't viable.

There's no possibility of obfuscation or remote execution as with other attack vectors.
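One simple audit that doesn't require interpreting the weights at all is verifying that the artifact you run is byte-identical to the publicly released one, so nobody has swapped in a tampered checkpoint downstream. A minimal sketch (the filename and checksum below are placeholders, not from any real release):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: compare against a checksum published by the model vendor.
# published = "..."  # from the official release page
# if sha256_of_file("model.safetensors") != published:
#     raise RuntimeError("weights do not match the published release")
```

This only proves provenance, not safety, but it's the cheap first layer before behavioral audits (probing for trigger inputs) or full training-data audits that fully open models allow.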



