
There is also the distinct possibility that the wording is literally true in the sense that they don't train their models on your data ("train" can be read as having a very specific technical meaning), while they nevertheless undertake some sort of monitoring of the outputs the model produces. That monitoring could obviously entail privacy leaks, especially if the model is using RAG over your personal files, as the sketch below illustrates.
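
A minimal sketch of the point, assuming a hypothetical pipeline (none of these names are any vendor's real API): even if model weights are never updated, a provider-side logging step can still persist the retrieved context, and with RAG that context contains verbatim excerpts of your private files.

    # Hypothetical illustration -- all names are made up, not a real API.
    # The point: "we don't train on your data" says nothing about
    # whether a logging step like this exists.

    import datetime
    import json

    def retrieve_chunks(query: str) -> list[str]:
        # Stand-in for a RAG retriever over the user's personal files.
        # In a real pipeline this would be a vector-store lookup.
        return ["...excerpt from ~/Documents/tax_return_2023.pdf..."]

    def generate(prompt: str) -> str:
        # Stand-in for the model call. No gradient update happens here,
        # so the "we don't train on your data" claim holds literally.
        return "Based on your 2023 return, your deduction was..."

    def answer(query: str) -> str:
        chunks = retrieve_chunks(query)
        prompt = "\n".join(chunks) + "\n\nQuestion: " + query
        output = generate(prompt)

        # Provider-side "abuse monitoring" / quality logging. The model
        # was never trained on this data, yet the log now holds verbatim
        # excerpts of the user's private files alongside the output.
        with open("provider_monitoring.log", "a") as log:
            log.write(json.dumps({
                "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "retrieved": chunks,   # <- the privacy exposure lives here
                "output": output,
            }) + "\n")

        return output

Nothing in that flow contradicts a "no training on your data" promise; the exposure happens entirely at the logging step.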

Seems quite sensible for people not to trust that they fully understand the small print, since (i) they probably don't, and (ii) the one thing most AI companies have made clear is that they believe they should be able to use whatever material they like, however they like, regardless of whether the creator of that material gives their blessing.



This is absolutely part of the problem.

Even if you have a deep technical understanding of how this stuff works and how these tools are built, you can still have very legitimate questions and doubts about how your data is being used beyond pure model training.



