
I was responding to the initial question actually (https://news.ycombinator.com/item?id=38113521). I appreciate the insight though - looking forward to checking out the Gibson book.

Still, I'm not sure I see the 'AI jumping out of the box' scenario happening any time soon (or ever). Chat apps finding missile codes, or convincing your smart toaster to jump into your bubble bath while you're in it, seem only about as real as a man in a box with intentions would be (and unfortunately humans can kill from inside and outside of boxes).

I'm definitely concerned about the implications for social welfare, the social bias involved in systems that make big decisions about an individual's freedom, etc., but these leaps from super impressive automated text analysis to humanity-doomsday scenarios seem like fear mongering to me, mostly because these are scenarios that already exist today (along with massive problems in general social welfare).

The scenarios that don't exist (like Nick Bostrom's objective function that turns the world into a paper clip factory, from "Superintelligence: Paths, Dangers, Strategies") strike me as easy to fix by just standing by the power outlet so things don't get out of hand. There are a lot of risks that don't exist yet. Alien contact is one of them - it has never happened, but it could, and if it does it could wipe us out - so be afraid and do something noble for the cause. This to me feels like a very rational response to what is essentially a 'magical' prior. We're scared now because we 'feel' close to General AI, but we really have no way of quantifying how close we are to it, or how dangerous (if at all) it would actually be. I'm definitely open to being wrong, but it's hard not to agree with LeCun on some level.



