
Exactly my experience too. I'm now using AI maybe 25% of the time or less. I always get to a point where I notice that agentic coding is making me not want to actually think. There's no way anyone can convince me that that is a superior approach, because every time I took a few days away from the agents to actually think, I came up with a far superior architecture and code that rendered much of what the agents were hammering away at moot.

Agentic coding is like a drug or a slot machine: it slowly draws you in with the implicit promise of getting much for little. The only way it is useful to me now is for very focused tasks where I have spent a lot of time defining the architecture down to the last detail, and the agents are used to fill in the blanks, as it were.

I also think I could write a better agent, and why the big corps have not done so is baffling to me. Just getting the current agents to obey the guidelines in the agent .md files is a struggle. They forget pretty much everything two prompts down the line. Why can't the CLI systematically prompt them to check every time, etc.?
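To be concrete about what I mean, here is a rough sketch, assuming a hypothetical chat-style client and an AGENTS.md at the repo root (the function names are placeholders, not any real CLI's API). The idea is just that the harness, not the model, is responsible for re-reading the guidelines on every turn:

    # Sketch: re-inject the guideline file each turn so the model never has to
    # "remember" it across prompts. send_to_model is a placeholder, not a real API.
    from pathlib import Path

    GUIDELINES = Path("AGENTS.md")

    def build_messages(history, user_prompt):
        # Re-read the file every turn so edits to AGENTS.md take effect immediately.
        system = GUIDELINES.read_text() if GUIDELINES.exists() else ""
        return (
            [{"role": "system", "content": system}]
            + history
            + [{"role": "user", "content": user_prompt}]
        )

    def turn(history, user_prompt, send_to_model):
        messages = build_messages(history, user_prompt)
        reply = send_to_model(messages)  # placeholder for the actual model call
        history.append({"role": "user", "content": user_prompt})
        history.append({"role": "assistant", "content": reply})
        return reply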

Something tells me the future is about domain-aware agents that help users wring better performance out of the models, based on some domain-specific deterministic guardrails.
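By "deterministic guardrails" I mean something like the loop below, a rough sketch where run_agent and the mypy check are stand-ins for whatever your domain provides: the agent's output is not accepted until a deterministic check passes, and failures are fed back as context.

    # Sketch of a deterministic guardrail loop. Any deterministic verifier works
    # here: a type checker, schema validator, domain rule engine, test suite, etc.
    import subprocess

    def check(patch_dir: str) -> tuple[bool, str]:
        result = subprocess.run(["mypy", patch_dir], capture_output=True, text=True)
        return result.returncode == 0, result.stdout

    def guarded_step(task: str, run_agent, patch_dir: str, max_tries: int = 3) -> bool:
        feedback = ""
        for _ in range(max_tries):
            run_agent(task + feedback)  # placeholder for the agent invocation
            ok, report = check(patch_dir)
            if ok:
                return True
            feedback = "\n\nThe previous attempt failed these checks:\n" + report
        return False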



I've had experiences like this before, but if that's the ONLY experience you've had, or if you have that experience 75% of the time, I think you're doing something wrong. Or perhaps you're just in a very different coding domain than mine (web dev, HTML/CSS/JS), one where the AI happens to suck.

The biggest mistakes imo are:

1. Underplanning. Trying to do huge projects in one go, rather than breaking them down into small projects, and breaking those small projects down into well-thought-out plans.

2. Too much focus on prompting rather than context. Prompt engineering is obsessing over the perfect way to say or phrase something, whereas context engineering is about putting relevant information into the LLM's working memory, which requires you to go out and gather that info (or use the LLM to get it). A rough sketch of what I mean is below.
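Toy illustration of the difference, with made-up file names and a made-up send() call: instead of polishing the wording of the ask, spend the effort collecting the material the model actually needs.

    # Pull the relevant interfaces, schemas, and call sites into the prompt
    # verbatim, so the model works from facts rather than guesses.
    from pathlib import Path

    def gather_context(paths: list[str]) -> str:
        parts = []
        for p in paths:
            parts.append(f"--- {p} ---\n{Path(p).read_text()}")
        return "\n\n".join(parts)

    prompt = (
        "Add pagination to the /orders endpoint.\n\n"
        "Relevant code:\n\n"
        + gather_context(["api/orders.py", "db/queries.py", "tests/test_orders.py"])
    )
    # send(prompt)  # whatever client you use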


I've had my share of good and bad experiences; one section of an existing project is more than 90% AI-created. How you say things is just as important as the context you provide, partly because the agents will start trying to decide what is and is not good context, which they are unreliable at, even after you give them the limited context and tell them not to edit other files or bring in more context. For example, if you use a lot of colloquial phrases, you activate that area of the network and take away from using other parts (MoE activation, and at lower levels too).

They are not good readers (see the research around context collapse and context poisoning).


If we take Elon Musk's approach to challenging engineering problems, which in this exact order is:

1. Question every requirement

2. Delete any part of the process you can

3. Simplify and optimize

4. Accelerate cycle time

5. Automate

In my experience, coding agents at the moment are really good at 4 and 5, and they absolutely suck at 1 and 2.

They are okay at 3 if prompted well.

Humans are okay at 1 and 2 IF they understand the system well and critically question the requirements. With LLM-generated codebases this system understanding is often missing, so you can't even start with 1.



