
> There are also times where it isn't.

Should this have the "Significantly so" qualifier as well?



I'm not sure. I think it's asymmetric: high upside potential, but low downside.

Because when the AI isn't cutting it, you always have the option to pull the plug and just do it manually. So the downside is bounded. In that way it's similar to the Mitch Hedberg joke: "I like an escalator, because an escalator can never break. It can only become stairs."

The absolute worst-case scenario is a situation where you think the AI is going to figure it out, so you keep prompting it, far past the point when you should've changed your approach or given up and done it manually.


This is so far from an absolute worst-case scenario.

You could have a codebase subtly broken on so many levels that you cannot fix it without starting from scratch - losing months.

You could slowly lose your ability to think and judge.


Ha, great answer! Of course there are a lot of nuances to that, but I don't want to beat a dead horse.



