
Of course not. I'm leaning towards Yudkowsky's recommendation, which is to put a pause on all of it.


I have a bridge to sell Yudkowsky and everyone else who believes that an "AI pause" is actually something that can happen.

Global treaties are nearly all backed by implied threats of violence, and we don't have enough guns to force this on India, China, the Middle East, the EU, and the US. Major AI development happens in all of those places.


You're reading this wrong. Yudkowsky isn't saying that this will happen; in fact, it's as apparent to him as to everyone else how unlikely that is.

Yudkowsky's point is that this is the minimum, least difficult thing we need to do. LeCun and others on both sides of the "AI regulation" argument are busy arguing about the color of the carpet while the whole house is on fire.


I think we all also fundamentally disagree about whether the house is on fire at all. History is littered with examples of Luddites yelling about how the bottom of a technological S-curve is actually an exponential and how we Must Do Something to prevent a catastrophe.

With possibly one exception (biological weapons, research on which seems to have few positive externalities), they have always been wrong. And no, nuclear weapons don't count as a second exception - we are seeing significant negative societal fallout from failing to invest in nuclear technology. So no, the house is almost certainly not on fire.


> History is littered with examples of Luddites yelling about how the bottom of a technological S-curve is actually an exponential and how we Must Do Something to prevent a catastrophe.

I can't think of any such examples; do you have some?

Of the two that come to my mind:

- The only thing the Luddites were yelling about was having their livelihoods pulled out from under them by greedy factory owners who aggressively applied automation instead of trying to soften the blow. They weren't against technological progress; the infamous destruction of looms wasn't a protest against progress but against the treatment of the laboring class.

- The "Limits to Growth" crowd, I don't think they were wrong at all. Their predictions didn't materialize on schedule because we hit a couple unexpected technological miracles, the Haber–Bosch process being the most prominent one, but the underlying reasoning looks sound, and there is no reason to expect we'll keep lucking into more technological miracles.


Lots of people recommend a pause, but Yudkowsky explicitly argues that a pause is entirely inadequate and instead recommends an extreme global crusade against AI research, so it's kind of odd to point to him as a pause-recommender the way you have.



