Hacker News

Is that even something they keep on hand? Or would even WANT to keep on hand? I figured they're basically sending a crawler to go nuts reading things and discarding the data once they've trained on it.

If that included, e.g., reading all of GitHub for code, I wouldn't expect them to host an entire separate read-only copy of GitHub just because they trained on it, and say "this is part of our open source model".


