Coding with an AI is an amplifier. It'll amplify your poor planning just as much as it amplifies your speed at getting some coding task done.
An unplanned problem gets amplified too — 10-100x worse than if you'd coded slowly, by hand. That's when the AI starts driving you into Complexity Corner™ (LOL) to work around the lack of planning.
If all you're ever doing is using prompts like `write a function to do...` or `write a class...`, you're never going to run into the sorts of super fucking annoying problems that people using AI complain about the most.
It's soooooo tempting to just tell the AI to make complete implementations of things and say to yourself, "I'll clean that up later." You make so much progress so fast this way it's unreal! Then you hit Complexity Corner™ where the problem is beyond the (current) LLM's capabilities.
Coding with AI takes discipline! Not just knowledge and experience.
I agree, but would maybe argue that the level of instructions can be slightly higher level than "write function" or "write class" without ending up in a veritable cluster fuck, especially if guard rails are in place.
It's far more than planning. You have to "get to know your LLM" and its quirks so you know how to write for it, and when a new update ships (a new cutoff date or version), you have to do it all over again. Same for the agentic frameworks, whenever they change their system prompts and such.
It's a giant, non-deterministic, let's-see-what-works-based-on-vibes mess of an approach right now. Even within the models themselves, architecturally, there are recent results suggesting people are trying out weird things just to see if they work; it's unclear whether these come from first-principles intuition and hypothesis formation, or from throwing things at the wall to see what sticks.
> Since vibe coding is so chaotic, rigorous planning is required, which not every developer had to do before.
I do believe the problem is different:
I think I am pretty good at planning, but I have a tendency (in particular for private projects) to work on things where the correctness requirements are very high.
While I can describe very precisely what the code is supposed to do, even small errors can make the code useless. If the foundations are not right, it becomes hard to detect errors at the higher levels (to mitigate this, I write lots of test cases).
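To make that concrete, here's a minimal sketch (the function and its intervals are hypothetical, not from the original comment) of the kind of foundational helper where a subtle mistake — say, treating half-open intervals as closed — would silently corrupt everything built on top of it, and where a handful of test cases guard the foundation:

```python
def interval_overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap between two half-open intervals [start, end)."""
    # A subtle bug here (e.g. an off-by-one from assuming closed intervals)
    # would be invisible at this level but break every higher layer.
    return max(0, min(a_end, b_end) - max(a_start, b_start))

# A few of the many test cases pinning down the foundation:
assert interval_overlap(0, 10, 5, 15) == 5   # partial overlap
assert interval_overlap(0, 10, 10, 20) == 0  # touching only: half-open means no overlap
assert interval_overlap(0, 10, 2, 4) == 2    # containment
assert interval_overlap(5, 5, 0, 10) == 0    # empty interval contributes nothing
```

The edge cases (touching endpoints, empty intervals) are exactly where an AI-generated implementation tends to drift from the spec you had in mind.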
Also, I often have a very specific architecture for my code in mind. If the AI tries to do things differently, the code can easily become much less useful. In other words: precisely because I plan things carefully (as you say is required), the AI becomes much less useful when it "does its own thing" instead of following my plan.
Since vibe coding is so chaotic, rigorous planning is required, which not every developer had to do before.
You could simply "vibe" code yourself, roam, explore, fix.
Is that a fair description of your comment?