As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I work with, but I keep catching mistakes in their code that they never used to make. Large suites of elegant-looking unit tests, except the tests duplicate large chunks of the test framework's own functionality for no reason, and I've even seen unit tests that mock the actual function under test. New features that already exist elsewhere behind saner APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
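To make the "mocking the function under test" point concrete, here's a minimal Jest-style sketch (the module and names are hypothetical, not from any real codebase) of the pattern I keep seeing:

    // Anti-pattern: the "test" mocks the very function it claims to verify.
    import { calculateDiscount } from "./pricing";

    // jest.mock is hoisted above the import, so the real implementation is
    // never loaded; calculateDiscount is replaced with a stub returning 42.
    jest.mock("./pricing", () => ({
      calculateDiscount: jest.fn(() => 42),
    }));

    test("calculateDiscount applies the discount", () => {
      // This only exercises the stub, so it passes no matter what the real
      // calculateDiscount actually does.
      expect(calculateDiscount(100, 0.5)).toBe(42);
    });

A test like that stays green forever, which is exactly why it slips through review.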
As someone on a team with a less stringent code review culture, AI-generated code creates more work when used indiscriminately. It's good enough to get approved but full of non-obvious errors that cause expensive rework, which only gets prioritized once the shortcomings become painfully obvious, usually months after the original work was “completed” and after the original author has forgotten the details or, worse, left the team entirely. Not to say AI-generated code isn't occasionally valuable, just not for anything that is intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI-generated code as a way to avoid fully understanding the problem that needs to be solved.
Exactly: it's the non-obvious errors that are easy to miss, doubly so if you are just scanning the code. Those errors can create very hard-to-find bugs.
So between the debugging and the many times you need to reprompt and redo (if you bother at all; skipping that just adds more debugging time later), is any time actually saved?
I think the dust hasn't settled yet because no one has shipped mostly AI-generated code for a non-trivial application. They couldn't have, given its current state. So it's still unknown whether building on incredibly shaky ground will actually work in real life (I personally doubt it).
In fairness, I've seen humans make that mistake. We once had a complete outage in a product's test environment and a couple of tests were still green. Turns out they tested nothing and never had.
That kind of slop already existed, but AI scales it by an order of magnitude.
I guess the same can be said of any technology, but AI is just a more powerful tool overall. Using languages as an example: let's say duck typing allowed a 10% productivity boost but also introduced 5% more mistakes/problems. AI (claims to) allow a 10x productivity boost, but also brings ~10x the mistakes/problems.
I've interfaced with some AI-generated code, and after finding several subtle yet very wrong bugs, I now read code I suspect came from AI (or from an AI-loving coworker) with much, much more scrutiny than I used to. Frankly, I've lost trust that some coworkers care about quality or do any due diligence at all.
I see how AI can be useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: in the 2 months after my org pushed Copilot down to everyone, the number of warnings in our main project's codebase went from 2 to 65. I eventually cleaned those up and created a GitHub Action that rejects any PR that introduces new warnings, but it generated a lot of pushback initially.
Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out the obvious bugs, problems, and design improvements (no, I don't think this component needs 8 useEffects added to it that deal exclusively with global state only relevant 2 layers down, effectively treating React components like an event-handling system for data; don't believe people who tell you LLMs are good at React, because if you see a useEffect with an obvious LLM comment above it, it's likely buggy or unnecessary), your questions about it are answered with an immediate flurry of commits and it's back to square one.
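For anyone who hasn't run into the useEffect pattern I mean, here's a hypothetical sketch (component, store, and names invented for illustration; the store hook is assumed to be a zustand-style global store, not any real API from these PRs):

    import { useEffect } from "react";
    import { useGlobalStore } from "./store"; // assumed zustand-style global store

    function OrderSummary({ items }: { items: { id: string; price: number }[] }) {
      const setCartTotal = useGlobalStore((s) => s.setCartTotal);

      // This effect exists only to mirror a derived value into global state so
      // a component two layers down can read it; i.e. the component tree is
      // being used as an ad-hoc event-relay system. The total could simply be
      // computed where it's needed, or passed down as a prop.
      useEffect(() => {
        setCartTotal(items.reduce((sum, i) => sum + i.price, 0));
      }, [items, setCartTotal]);

      return <ul>{items.map((i) => <li key={i.id}>{i.price}</li>)}</ul>;
    }

Multiply that by eight per component and you get the PRs I'm talking about.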
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.
I wonder if there's a way to measure the cost of such code and attribute it to the individuals incurring it. Unless this shows up in reports, managers will continue believing LLMs are magic time-saving machines that write perfect code.
As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the labor market for SWEs, and my guess would be for the negative. People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here. If you judged only by the sentiment on HN, you would think no coder worth their salt was using this in the real world, but my experience on a few teams over the last two years has been exactly the opposite: people are often embarrassed to admit it, but they are using it all the time. There are many engineers at Meta who "no longer code" by hand and do literally all of their problem solving with AI.
I remember last year, or even earlier this year, feeling like the models had plateaued, and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, Gemini 3, et al., these models are incredibly powerful, and more and more SWEs are leaning on them more and more. That trend may slow down or speed up, but it is never going to backslide. I think people who generally don't see this are fooling themselves.
Sure, there are problem areas: it misses stuff, there are subtle bugs, it's not good for every codebase, every language, or every scenario. There is some sloppiness that is hard to catch. But this is true of humans too. Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWEs; it's good enough to do enough of the tasks of enough mid-level SWEs at enough companies to reshape the market.
I'm sure I'll get downvoted to hell for this comment, but I think SWEs (and everyone else, for that matter) would do well to practice some fiscal austerity, because I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, they've made all of the progress up to now in essentially the last 5 years, and the models are already incredibly capable.
This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.
I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white-collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling that they're contributing to something, even if some would rather do nothing.
To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.
Exactly. For example, what happens to open source projects whose developers don't have access to the latest proprietary dev tools? Or what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen Maven builds on Java projects that pull in hundreds or even thousands of libraries; 99% of that code is never even used.
The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.
> I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line.
Most people do not dream of working most white collar jobs. Many people dream of meaningful physical labor. And many people who worked in mines did not dream of being told to learn to code.
The important piece here is that many people want to contribute to something intellectually, and a huge pathway for that is at risk of being significantly eroded. Permanently.
Your point stands that many people like physical labor, whether they want to artisanally craft something or would rather be outside doing physical or even menial work than sitting in an office. True, but that doesn't solve the above issue, just like it didn't in reverse. Telling miners to learn to code was... not great. And from my perspective, neither is outsourcing our thinking en masse to AI.
I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you had told someone 5 years ago that AI could find and fix a bug in complex code with almost zero human intervention, nobody would have believed you, but this is the reality today. It can find bugs, fix bugs, refactor code, and write code. It's not perfect, but with a well-organized code base and careful prompting, it rivals humans at many tasks (and certainly outperforms them in some aspects).
As you're also saying, this is the worst it will ever be. There is only one direction; the question is the acceleration/velocity.
Where I'm not sure I agree is with the perception this automatically means we're all going to be out of a job. It's possible there would be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester's motte to retreat to whenever the bailey of a technology's claimed utility falls. It's trivially true of literally any technology, but also completely meaningless on its own.
I think whether you are right or wrong, it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc.); I certainly do on some of these points, and I think that's a rational response to the risk of an unknown future.
In general, I ask myself: if I were not personally invested in this situation (i.e. just another man on the street), what would my immediate reaction to this be? Would I still become a software engineer, for example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?
I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true, that is, even if the current state of LLMs is the best it will ever be, AI would still be helpful. It is already great at writing self-contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point early in their childhood or career when programming was more fun than work, thought about how cool it would be for a computer program to write a computer program. And now that we have the capability in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances, good or bad. Even if the worst happens and people lose jobs, it will hurt families in the short term, but over time humans will adapt, learn to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it like any other tool is. A tool that, if used correctly by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US—when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, it could be the first restructuring where we see massive losses in jobs.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one. Research on natural language generation and machine learning dates back to the 50s; it just only recently came together at scale in a way that became useful. Tons of the hardest progress was made over many decades, and very little fundamental innovation has happened in the last 5 years. The innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning from human feedback and other such fine-tuning.
We're definitely in splitting-hairs territory, but I think most of what people call modern AI is the result of the transformer paper. Of course, that was built on the back of decades of research.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
The "hallucination problem" can't be solved, it's intrinsic to how stochastic text and image generators work. It's not a bug to be fixed, it's not some leak in the pipe somewhere, it is the whole pipe.
> there's still going to be a lot of toxic waste generated.
And how are LLMs going to get better as the quality of the training data nosedives because of this? Model collapse is a thing. You can easily imagine a scenario in which they're never better than they are now.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be
I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.
I suspect unless we have real AGI we won't have human-level coding from AIs.
Pretty much. Someone on our team put out a code review for a new feature and then bounced for a 2-week vacation. One of our junior engineers approved it. Despite being in a section of dead code that wasn't even supposed to be enabled yet, it managed to break our test environment. It took senior engineers a day to figure out how that was even possible before reverting. We had another couple of engineers look at what it would take to fix the bug. All of them came away with the conclusion that it was 1,000 lines of pure AI-generated slop with no redeemable value, and that trying to fix it would take more work than re-implementing it from scratch.
pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
so your process seems inadequate to me, agents or not.
also, was it tagged as generated? that seems like an obvious safety feature. As a junior, I might be thinking: 'my senior colleague sure knows lots of this stuff', but all it would take to dispel my illusion is an agent tag on the PR.
> pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
Yeah that’s what I think we need to enforce. To answer your question, it was not tagged as AI generated. Frankly, I think we should ban AI-generated code outright, though labeling it as such would be a good compromise.
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
Exactly. I think it's pretty clear that software engineering is an "intelligence-complete" problem. If you can automatically solve SWE, then you can automatically solve pretty much all knowledge work.
The difference is that unlike SWEs, the people doing all that bullshit work are much better at networking, so they will (collectively) find a reason why they shouldn't be replaced with AI and push it through.
SWEs could do so as well, if only we were unionized.
I see it like the hype around js/node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't code js. The sum of F's given is still zero.