Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more about incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which diverges heavily from the OSS-focused world of startups, where most training data comes from.




That is the problem.

AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.

I think long-running agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI's generation time, correction time, and engineer review time. To put it another way, a 10-minute sync with a human is necessary, otherwise it will go astray.

Then it just makes software engineering into a bothersome supervisor job. Yes, I typed less, but I didn’t feel the thrill of doing so.


> it just makes software engineering into a bothersome supervisor job.

I'm pretty sure this is the entire enthusiasm from the C-level for AI in a nutshell. Until AI, SWE resisted being mashed into a replaceable-cog job that they don't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.


But every version of AI for almost a century has had this property, right from the first vocoders that were going to replace entire call centers to the convolutional nets that were going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, but they could essentially read the time aloud.

... except they didn't. In fact, most AI tech was good for a nice demo and little else.

In some cases, really unfairly. For instance, convnet map matching gets dismissed not because it doesn't work well, but because you can't explain to humans when it won't work. It's unpredictable, like a human: if you ask a human to map a building in heavy fog, they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse. But when it fails it's very clear why it fails, because it's a very visual algorithm.

People expect AIs to replace humans, but that doesn't work, because people also demand that AIs never say no and never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). A human delivery person will occasionally radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems willing to accept a 99% AI solution, and all AI solutions are 99% at best, usually less.

Hence AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years LLMs will be in a lot of stuff, just as after every other AI winter. A lot of stuff... but a lot less than CEOs are declaring it will be in today.


Luckily for us, technologies like SQL made similar promises (for more limited domains) and C-suites couldn't be bothered to learn that stuff either.

Ultimately they are mostly just clueless, so we will either end up with legions of way shittier companies than we have today (because we let them get away with offloading a bunch of work to tools they don't understand and accepting low-quality output) or we will eventually realize the continued importance of human expertise.


There are plenty of good tasks left, but they're often one-off/internal tooling.

Last one at work: "Hey, here are the symptoms of a bug; they appeared in <release XYZ> - go figure out the CL range and which 10 CLs I should inspect first to see if they're the cause."

(Well suited to AI, because worst case I've looked at 10 CLs in vain, and best case it saved me from manually scanning through several thousand CLs - the EV is net positive.)
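
For the curious, the shape of that triage is easy to sketch outside Google. The sketch below is a hypothetical, minimal version: it assumes a plain git repo where two made-up release tags bound the regression, and it only builds the ranking prompt - the actual model call is left to whatever LLM or CLI you have access to, since that part is entirely environment-specific.

    #!/usr/bin/env python3
    # Rough sketch of the "which 10 CLs should I look at first" triage, translated
    # to a plain git repo (at Google this would be a CL range instead). The symptom
    # text and tag names below are made-up placeholders, and the model call itself
    # is left to whatever LLM you actually use.
    import subprocess

    SYMPTOMS = "crashes on startup when the config file is missing"   # hypothetical
    GOOD_TAG, BAD_TAG = "release-1.41", "release-1.42"                 # hypothetical tags

    def commits_between(good: str, bad: str) -> list[str]:
        """One-line summaries of every commit that landed between two releases."""
        out = subprocess.run(
            ["git", "log", "--oneline", "--no-merges", f"{good}..{bad}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    def build_prompt(symptoms: str, commits: list[str]) -> str:
        """Ask the model to rank the 10 most plausible culprits, with reasons."""
        return (
            "A regression with these symptoms appeared between two releases:\n"
            f"{symptoms}\n\n"
            f"Here are the {len(commits)} commits in that range, one per line:\n"
            + "\n".join(commits)
            + "\n\nList the 10 commits most likely to have caused this, most likely"
              " first, with a one-sentence reason for each."
        )

    if __name__ == "__main__":
        prompt = build_prompt(SYMPTOMS, commits_between(GOOD_TAG, BAD_TAG))
        print(prompt)   # pipe this into whichever model you use, then review its top 10 by hand

Worst case the model's top-10 list is wrong and you fall back to scanning the range by hand, which is exactly the EV argument above.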

It works for code generation as well, but not in a "just do my job" way, more in a "find which haystack the needle is in, and what the rough shape of the new needle is". Blind vibecoding is a non-starter. But... it's a non-starter for greenfields too, it's just that the FO of FAFO is a bit more delayed.


My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at, than it is to change AI into something that fits every problem.'

But unfortunately the nuances of the former require understanding the strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.

Aka 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'


> 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'

I see we've met the same product people :)


I had a VP of a revenue cycle team tell me that his expectation was that they could fling their spreadsheets and Word docs on how to do the calculations at an AI-powered vendor, and the AI would be able to (and I quote directly) "just figure it all out."

That's when I realized how far down the rabbit hole marketing to non-technical folks on this was.


I think it’s a fair point that Google has more stakeholders with a serious investment in some flubbed AI-generated code not tanking their share value, but I’m not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don’t want to break.

Yeah, but Google won’t expect you to use AI tools developed outside Google and trained primarily on OSS code. It would expect you to use the Google-internal AI tools trained on google3, no?


