It depends. I've been working on a series of large, gnarly refactors at work, and the process has involved writing a fairly long, hand-crafted spec/policy document. The big advantage of Opus has been that the spec is now machine-executable -- I repeatedly fed it into the LLM and saw what it did on some test cases. That sped up experimentation and prototyping tremendously, and it also surfaced a lot of ambiguities in the policy document that were helpful to address.
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
The whole process was really fun! It's not really vibe coding at that point (I continue to be relatively unimpressed by vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
For me it's the opposite. I do have a good feeling for what I want to achieve, but translating that into program code and testing it has always caused me outright physical pain (and in the case of C++ I really hate it). I've been programming since age 10. Almost 40 years. And it feels like liberation.
It brings the "what to build" question front and center, while "how to build it" has become much, much easier and more productive.
Indeed. I still use AI for my side projects, but I strictly limit it to discussion only, no code. Otherwise what is the point? The good thing about programming is that, unlike playing chess, there is no real "win/lose" scenario, so I won't feel discouraged even if AI can do all the work by itself.
Same thing for science. I don't mind if AI could solve all those problems, as long as they can teach me. Those problems are already "solved" by the universe anyway.
Even the discussion side has been pretty meh in my mind. I was looking into a bug in a codebase filled with Claude output and, for funsies, decided to ask Claude about it. It basically generated a "This thing here could be a problem but there is manual validation for it" response, and when I looked, that manual validation was nowhere to be found.
There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.
I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it, and then submit it for someone else to approve.
I don't discuss actual code with ChatGPT, just concepts. Like "if I have an issue and my algo looks like this, how can I debug it effectively in gdb?", or "how do I reduce lock contention if I have to satisfy A/B/...".
Maybe it's just because my side projects are fairly elementary.
And I agree that AI is pretty good at code review, especially if the code contains complex business logic.
Something in the back of my head tells me that automating (partial) intelligence feels different from automating a small-to-medium-scope task. Maybe I'm wrong, though.
The commonality of people working on AI is that they ALL know software. They make a product that solves the thing that they know how to solve best.
If all lawyers knew how to write code, we'd see more legal AI startups. But lawyers and coders are not a common overlap, surely nowhere near as common as SWEs and coders.
Dario Amodei claimed "AI will replace 90% of developers within 6 months" about a year ago. Still, they are just losing money, and probably will be forever, while producing more slop code that needs even more devs to fix it.
Good job AI fanboys and girls. You will be remembered when this fake hype is over.
I'm more of a doomsayer than a fanboy. But I think it's more like "AI will replace 50% of your juniors and 25% of your seniors and perhaps 50% of your do-nothing middle managers." And that's a fairly large number anyway.
100% in the doomer camp now. I wish I could be as optimistic as the people who think AI is all hype, but over the last few weeks it's finally starting to be more productive to use these tools, and I feel like this will be a short little window where the stuff I'm typing in, and my review of what's coming out, is still worth my salary.
I don't really see why anywhere near the current number of great jobs this industry has had will be justifiable in a year. The only comfort is that all the other industries will be facing the same issue, so accommodations will have to be made.
I wouldn't be surprised if it is only software and creative jobs that die. Meanwhile, I'll still find it expensive to buy a house and get food, and the grunt work will still need labor.
What that means for a society where extremely rich people own the resources and capital, and everyone else is valued only for their dexterity and physical labor (vs. skills), I can only guess.
I do think the AI labs have potentially unleashed a society-changing technology that ironically penalizes meritocracy and/or intelligence by making them less scarce. The jobs left will be the ones people avoided for a reason (health, risk, etc.).
Pick anything else, and you have a far better likelihood of falling back on a manual process, a legal wall, or whatever else AI cannot easily replace.
Good job boys and girls. You will be remembered.