This isn’t “unfair”, but you are intentionally underselling it.

If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.

Edit: lol this forum :)

> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right

I AM very impressed, and I DO use it and enjoy the results.

The problem is the inconsistency. When it works, it works great, but the way it behaves makes it very noticeable that it is just a machine.

Again, I am VERY impressed by what has been achieved. I even enjoy the Google AI summaries for some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.

But I'm already done getting used to what is possible now. Changes since then have been incremental: nice to have, and I'll take them. I have found a place for the tool, but to match the hype, and to truly be able to replace humans, it would need another equally large step in actual intelligence.

So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can and can't do, and are already using it where appropriate. It's just a tool, though. One that has to be watched over when you use it, requiring attention. And it does not learn: I can teach a newbie and they will learn and improve; I can only tweak the AI with prompts, with varying success.

I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to use it is simply not useful.

I am actually one of those people who doesn't enjoy coding as such but wants "solutions", probably also because I now work for a normal company that uses IT, not one making an IT product, and my focus most days is on actually accomplishing business tasks.

I do enjoy being able to write some higher-level descriptions and get code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to let me reliably delegate to the AI to the degree I want.


The big problem is that AI is amazing at the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues, it would be hopeless. Once your system gets complex enough, the AI's effectiveness drops off rapidly, and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.

In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.

I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.


I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we've never seen it attributed):

> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs, and sometimes trying to break the task down into teeny-weeny pieces and cajole the model into doing the thing is worse than not having it at all.

So: great with the backhoe tasks, but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of task takes up most of your dev time.


The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.

Right, that's bitten me "whipping up" prototypes. My assumption about the way the LLM would handle some minutiae ends up being wrong, and finding out why something isn't working ends up taking more time than doing it right the first time by hand. The worst part is that you can't even factor it into your already inaccurate work time estimates, because it could strike anywhere, including things you'd never mess up yourself.

> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Or your job isn't what AI is good at?

AI seems really good at greenfield projects in well-known languages, or at adding features.

It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.


> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

This is precisely my experience.

Having the AI work on a large monorepo with a front-end that uses a fairly obscure templating system? Not great.

Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.


> It's been pretty awful, IME, at working with less well-known languages

Well, there’s your problem. You should have selected React while you had the chance.


The more I use AI for coding, the more I realize that it's a toy for vibe coding and fun projects. It's not for serious work.

When you work with a large codebase that has a very high complexity level, the bugs AI puts in there are not worth the cost of the easily added features.


Many people also program and have no idea what a giant codebase looks like.

I know I don't. I have never been paid to write anything beyond a short script.

I actually can't even picture what a professional software engineer actually works on day to day.

From my perspective, it is completely mind-blowing to write my own audio synth in Python with Librosa, a library I didn't know existed before LLMs. Now I have a full-blown audio mangling tool that I would never have been able to figure out on my own.
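
For the curious, the kind of mangling I mean looks roughly like this (a minimal sketch; the file names and parameters are made up, and it assumes librosa's standard effects API plus soundfile for writing the result):

    import librosa
    import soundfile as sf

    # Load a clip at its native sample rate ("input.wav" is a placeholder path).
    y, sr = librosa.load("input.wav", sr=None)

    # Two classic manglings: stretch to half speed, and pitch down a fifth.
    slowed = librosa.effects.time_stretch(y, rate=0.5)
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-7)

    sf.write("mangled.wav", shifted, sr)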

It seems to me professional software engineering must be at least as different from vibe coding as my audio noodlings are from being a professional concert pianist. Both are audio- and music-related, but they are really two entirely different activities.


I work on a stock market trading system in a big bank, in Hong Kong.

The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in JavaScript stuck in an infinite migration.

Most of the code was written before 2008, but that was when we switched from CVS to SVN, so we lost the history before that. We have employees dating back to 1997 who remember the platform already existing.

It's made of millions of lines of code, hundreds of people have worked on it, and it does intricate things in 10 stock markets across Asia (we have no real clue how the others in the US or EU do it; it's not the same rules, market vendors, protocols, etc.).

Sometimes I need to configure new trading robots for some random little thing we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I had to write my manager a review that was absolutely destructive, which they will never pass up the ladder for fear of the response. It cannot understand the code, let alone write any; it cannot write the tests, it cannot generate configuration, it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are, how they connect to each other, why it's in so many languages, or why it's so quirky sometimes.

Should we change it all to make it AI-compatible, or give up? Fuck if I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really, really, really, really large, so you have to embrace certain ways of working (for instance: you'll write bugs, you'll find them too late, and you'll apologize in post-mortems; don't be paralyzed by it). AIs that cost all that money only to be this dumb and useless are disappointing :(


There’s a reason why it’s so much better at writing JavaScript than HFT C++.

The latter kind of codebase doesn't tend to be in GitHub repos as much.


This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?

> If you haven’t had a mind blown moment with AI yet...

Results are stochastic. Some people, the first time they use it, will get the best possible results by chance; they will attribute their good outcome to their skill in using the thing. Others will try it and get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
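
A toy illustration of the point (everything here is made up; it just shows how drawing from the same distribution can hand one first-time user a great outcome and another a dud):

    import numpy as np

    rng = np.random.default_rng()

    # Hypothetical quality distribution over responses to the same prompt.
    outcomes = ["mind-blowing", "okay", "nonsense"]
    probs = [0.5, 0.3, 0.2]

    # Two users, same prompt, same model: different luck.
    for user in ("first-time user A", "first-time user B"):
        print(user, "got:", rng.choice(outcomes, p=probs))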


Your whole comment reads like someone who is a victim of hype.

LLMs are great in their own way, but they're not a panacea.

You may recall that magic is a way to trick people into believing things that are not true. The mythical form of magic doesn't exist.


I wonder if this issue isn't caused by people who aren't programmers and can now churn out AI-generated stuff they couldn't before. To them, this is a magical new ability, whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and lets someone make cat memes they wouldn't have bothered with before. But the real artisan cat memeists just roll their eyes.

AI is better than you at what you aren’t very good at. But once you are even mediocre at doing something, you realize AI is wrong or pretty bad at most things, and every once in a while it makes a baffling mistake.

There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.


>AI is better than you at what you aren’t very good at.

Yes, this is better phrased.


> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.

> Edit: lol this forum :)

Indeed.
