> Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult.
Mostly unrelated to the correctness of the article, but this feels like a bad argument. AFAIK, Anthropic/OpenAI/Google are not having issues with their weights being leaked (are they?). Why is it that Meta's model weights are?
I think it’s hard to say. We simply don’t know much from the outside. Microsoft has had some pretty bad security lapses, for example around guarding access to Windows source code. I don’t think we’ve seen a bad security break-in at Google in quite a few years? It would surprise me if Anthropic and OpenAI had good security since they’re pretty new, and fast-growing startups have a lot of organizational challenges.
It seems safe to assume that not all the companies doing leading-edge LLMs have good security and that the industry as a whole isn’t set up to keep secrets for long. Things aren’t locked down to the level of classified research. And it sounds like Zuckerberg doesn’t want to play the game that way.
At the state level, China has independent AI research efforts and they’re going to figure it out. It’s largely a matter of timing, which could matter a lot.
There’s still an argument to be made against making proliferation too easy. Just because states have powerful weapons doesn’t mean you want them in the hands of people on the street.
We have nationals/citizens of every major US adversary working in those companies, with looser security practices than a local warehouse. The security check before hiring is a joke (it mostly verifies that the resume checks out), laptops can be taken home, and internal communications are not segmented on a need-to-know basis. Essentially, if China wants weights or source code, it will have hundreds of people to choose from who can provide it.
> AFAIK, Anthropic/OpenAI/Google are not having issues with their weights being leaked. Why is it that Meta's model weights are?
The main threat actors there would be powerful nation-states, in which case they'd be unlikely to leak what they've taken.
It is a bad argument though, because one day possession of AI models (and associated resources) might confer great and dangerous power, and we can't just throw up our hands and say "welp, no point trying to protect this, might as well let everyone have it". I don't think that'll happen anytime soon, but I am personally somewhat in the AI doomer camp.