What we should do and what we are forced to do are very different things. If I can get a machine to do the stuff I hate dealing with, I'll take it every time.


who's going to be held accountable when the boilerplate fails? the AI?


The buck stops with the engineer, always. AI or no AI.


I've seen juniors send AI code for review; when I comment on weird things in it, the response is just "I don't know, the AI did that."


Oh, me too. And I reject them the same as if they had copied code from Stack Overflow that they can't explain.


no, I'm testing it the same way I test my own code!


yolo merging into prod on a friday afternoon?


It's like the xkcd on automation

https://xkcd.com/1205/

After a while, it just makes sense to redesign the boilerplate and build some abstraction instead. Duplicated logic and data are hard to change and fix. The frustration is a clear signal to take a step back and take a holistic view of the system.
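A toy sketch of that kind of refactor (all names here are hypothetical, just for illustration): the same cast-with-default boilerplate repeated per field, then collapsed into one data-driven helper so the logic lives in a single place.

```python
# Before: each parser duplicates the same cast-and-default boilerplate.
def parse_port(raw):
    try:
        return int(raw["port"])
    except (KeyError, ValueError):
        return 8080

def parse_timeout(raw):
    try:
        return int(raw["timeout"])
    except (KeyError, ValueError):
        return 30

# After: the shared logic is written once, and each field becomes
# a row of data instead of a copy-pasted function.
def parse_field(raw, key, cast, default):
    try:
        return cast(raw[key])
    except (KeyError, ValueError):
        return default

FIELDS = [("port", int, 8080), ("timeout", int, 30)]

def parse_config(raw):
    return {key: parse_field(raw, key, cast, default)
            for key, cast, default in FIELDS}
```

Adding a field is now a one-line change to FIELDS, and a fix to the fallback logic applies everywhere at once.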


And this is a great example of something I rarely see LLMs doing. I think we're approaching a point where we will use LLMs to manage code the way we use React to manage the DOM. You need an update to a feature? The LLM will just recode it wholesale. All of the problems we have in software development will dissolve in mountains of disposable code. I could see enterprise systems being replaced hourly for security reasons: less chance of someone exploiting a vulnerability if it only exists for an hour. Since the popularity of LLMs proves that as a society we've stopped caring about quality, I have a hard time seeing any other future.



