Chance-Device's comments

It does though. That’s a separate issue from the inevitable layoffs and any bugs introduced along the way, but he’s not wrong.

Speak for yourself. I think he's extremely wrong

I think if all you care about is the outcome then sure, you might enjoy AI coding more

If you enjoy the problem solving process (and care about quality) then doing it by hand is way, way more enjoyable


If you don’t care about outcome then all you’re doing is playing a video game.

Sure, but the headline wasn't "Google CEO says ‘vibe coding’ made software development ‘so much less like a video game.’" In fact since many people think video games are enjoyable, making software development less gamelike might make it less enjoyable.

(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)


If writing code by hand is like playing a videogame, then vibe coding is like playing a slot machine

Argue about the value of video games all you like, I would still place them above slot machines any day


I think we’re mixing our metaphors here. What I mean is that, at the end of the day, you write code to get some result you actually care about, or that matters for some material reason. Work is labor. If you don’t care about that outcome or about optimizing for it, then you may as well play a video game or do code golf or something. What you want now is a hobby.

> If you don’t care about that outcome or optimizing for it,

I do care about the outcome, which is why the thought of using AI to generate it makes me want to gouge my eyes out

In my view using AI means not caring about the outcome because AI produces garbage. In order to be happy with garbage you have to not care


It depends on how you use it. I was running 15 agents at once, 12 hours a day for a month straight, because adding more was always the better move, and that wasn't very enjoyable. Now I'm back to writing code the enjoyable way, with minor LLM assistance here and there.

This is a weird article. How many times in your career have you been handed a grossly under-specified feature and had to muddle your way through, asking relevant people along the way and still being told at the end that it’s wrong?

This is exactly the same thing but for AIs. The user might think that the AI got it wrong, except the spec was under-specified and it had to make choices to fill in the gaps, just like a human would.

It’s all well and good if you don’t actually know what you want and you’re using the AI to explore possibilities, but if you already have a firm idea of what you want, just tell it in detail.

Maybe the article is actually about bad specs? It does seem to venture into that territory, but that isn’t the main thrust.

Overall I think this is just a part of the cottage industry that’s sprung up around agile, and an argument for that industry to stay relevant in the age of AI coding, one that isn’t really supported by anything.


I sometimes wonder how many comments here are driving a pro-AI narrative. This very much seems like one of those.

The agent's instructions here would be:

Look on HN for AI-skeptical posts. Then write a comment that highlights how the human got it wrong. And command your other AI agents to upvote that reply.


It has nothing to do with AI; the article is just plain wrong. You have to be either extremely dumb, extremely inexperienced, or only working solo not to understand this.


A lot of negativity towards this and OpenAI in general. While skepticism is always good, I wonder if this has crossed the line from reasoned criticism into socially reinforced dogpiling.

My own experience with GPT-5 Thinking and its predecessor o3, both of which I used a lot, is that they were super difficult to work with on technical tasks outside of software. They often wrote extremely dense, jargon-filled responses that contained fairly serious mistakes. As always, the problem was (and is) that the mistakes were peppered in with some pretty good assistance and knowledge, and it's difficult to tell what's what until you actually try implementing or simulating what is being discussed and find it doesn't work, sometimes for fundamental reasons that you would think the model would have told you about. And of course, once you pointed these flaws out to the model, it would then explain the issues to you as if it had just discovered them itself and was educating you about them. Infuriating.

One major problem I see is that RLHF seems to have shaped the responses so that they only give the appearance of being correct to a reasonable reader. They use a lot of the social signalling that we associate with competence and knowledgeability, and usually the replies are quite self-consistent. That is, they pass the test of looking like a correct response to a regular person. They just happen not to be. The model has become expert at fooling humans into believing what it's saying rather than at saying things that are functionally correct, because the RLHF didn't rely on testing anything those replies suggested; it only evaluated what they looked like.

However, even with these negative experiences, these models are amazing. They enable things that you simply would not be able to get done otherwise; they just come with their own set of problems. And humans being humans, we overlook the good and go straight to the bad. I welcome any improvements to these models made today, and I hope OpenAI are able to address these shortcomings in the future.


I feel the same: a lot of negativity in these comments. At the same time, OpenAI is following in the footsteps of previous American tech companies: making themselves indispensable to the extent that life becomes difficult without them, at which point they are too big to control.

These comments seem to be almost an involuntary reaction where people are trying to resist its influence.


Precisely: o3 and GPT-5 Thinking are great models, super smart and helpful for many things, but they love to talk in this ridiculously overcomplex, insanely terse, handwavy way. When they get things right, it's awesome. When they confidently get things wrong, it's infuriating.


I was expecting something about how to protect your consciousness from (or during) AI use, but I got a short 200-word note rehashing common sentiments about AI. I guess it’s not wrong; it’s just not very interesting.


Yeah, I found it slightly ironic that an argument against using AI is made as an empty social-media-style post. Ironically, AI could have written a better one.


it'd be worse, just longer


It is very interesting, because it tackles things people love to forget when using AI. A little over a decade ago it was a scandal how big tech companies were using people's data; now people knowingly give it to them via all kinds of bullshit apps. So you have to repeat the obvious over and over, and even then it won't click for many.


So wild to think Cambridge Analytica was a scandal worthy of congressional hearings. LLMs are personalized persuasion on steroids.


I still feel "weird" trying to reason about GenAI content or looking at GenAI pictures sometimes. Some of it is so off-putting in a my-brain-struggles-to-make-sense-of-it way.


To me the answer was fairly obvious—default to using your own thinking first


More insane than specifically developing AIs to write software, creating competition from machines as well? As a group we’re not exactly rational.


The question for me is, assuming that other companies are more humane and are avoiding laying people off before Christmas, what does January look like?


That's a bold assumption.

It used to be true, but companies seem dead set on demoralizing their workforce as much as possible.

They've raised prices as much as consumers will bear.

We're getting near the physical limits of how efficient things can get in many sectors.

If P/Es are to remain this high (and they have to for the rich to remain this rich), the profits must continue to grow far in excess of the total economy.

The only orange left to squeeze is labor costs.


Companies, especially large ones, generally don't think like that. Q1 can be an active quarter for layoffs (Q1 2023 was probably the most active layoff quarter since COVID), but generally, they happen when they happen (Q2 2025 was more active than Q1 2025).

One very real factor that actually does act as a gravitational pull toward doing layoffs in Q4 (Oct/Nov) is insurance elections. Waiting until January can create a lot of complicated paperwork and cost more. There is also very much a sense that doing layoffs before the holidays, while giving severance pay, is more humane than waiting, because those laid off can "spend time with family" (real words I've heard stated).


I can see the logic. Layoffs are always terrible. But if I am getting laid off anyway, I would prefer to know about it before I spend a whole bunch of money during the holidays.


According to usinflationcalculator.com, $100K today is more like $64K in 2007 dollars.


Eh… that’s abrupt. What changed so suddenly? Or are the numbers faked?


article's source @ https://downtownseattle.org/programs-services/research/econo...

    Violent crime incidents in Seattle police’s M sectors
    (the downtown core) declined 36% between June–August 2025
    compared to the same period in 2024. 
    This was the lowest number of incidents for the summer
    period since 2017.


Violent crime was down in practically every major city this summer. It's not really sudden; it's been trending down for the last two years.


To be honest, the core of Penrose’s idea is pretty stupid: that we can understand mathematics despite the incompleteness theorem being a thing, therefore our brains must use quantum effects that allow us to understand it. Instead of just saying, you know, that we use heuristics and just guess that it’s true. I’m pretty sure a classical system can do that.


I'm sure if you email him explaining how stupid he is he'll send you his Nobel prize.

Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.

It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.


I want to ignore the flame fest developing here. But, in case you are interested in hearing a doubter's perspective, I'll try to express one view. I am not an expert on Penrose's ideas, but see this as a common feature in how others try to sell his work.

Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.


"That human intelligence involves processes that algorithms cannot emulate" is the conclusion of his argument. The premise could be summed up as something like "humans have complete, correct perception of mathematical truth", although there is a lot of discussion of in what sense it is "complete" and "correct" as, of course, he isn't arguing that any mathematician is omniscient or incapable of making a mistake.

Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.


Hmm, I am less than certain this isn't still begging the question, just with different phrasing. That is, I see how they are "linked" to the point that they seem almost tautologically the same rather than a deductive sequence.


You realise that this isn’t even a reply so much as a series of insults dressed up in formal language?

Yes, of course you do.


It wasn't intended as an insult and I apologise if it comes across as such. It's easy to say things on the internet that we wouldn't say in person.

It did come from a place of annoyance, after your middlebrow dismissal of Penrose's argument as "stupid".


And you do it again: you apologise while insulting me. When challenged, you refuse to defend the points you brought up, so that you can pretend to be right rather than be proved wrong. The incompleteness theorem is where the idea came from, but you don’t want to discuss that; you just want to drop the name, condescend to people, and run away.


Here are the substantive things you've said so far (i.e. the bits that aren't calling things "stupid" and taking umbrage at imagined slights):

1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose's assumptions. I don't think you're right, but there is certainly no hard proof available that you're not. It's something that (for now, at least) we can agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.

2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)

3. (In your latest reply.) You think that Gödel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose's argument is absolutely based on Gödel's theorem.

4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.

That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.

As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and imputing dishonest motives to the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.


The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.

Not that we can’t get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.

And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.


I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.


The evolution thing is kind of a red herring in that we probably don't have to artificially reconstruct the process of evolution, though your reasoning isn't a good explanation for why the "evolution" argument is a red herring. Yes, nature already established incomprehensibly complex organic systems in these life forms, so we're benefiting from that. But the extent of our contribution is making some select animals mate with others, hardly comparable to building our own replacement for millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.


Most arguments and discussions around AGI talk past each other about the definition of what is wanted or expected, mostly because sentience, intelligence, and consciousness all lack agreed-upon definitions and are therefore undefined goals to build against.

Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.


True, but I think this reasoning is a category error: we were and are capable of rationally designing cars. We are not doing the same thing with AI today; we’re forced to optimize it instead. Yes, the structure that you optimize around is vitally important, but we’re still doing brute force rather than intelligent design at the end of the day. It’s not comparing like with like.


Even this is a weak idea. There's nothing that restricts the term 'AGI' to a replication of animal intelligence or consciousness.


> correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute

Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.

Lab-grown diamonds are a thing.


Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.


The overwhelming majority of animal species never developed (what we would consider) language-processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...


Define "animal species", if you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates ... you might be surprised by the ethological literature too.

If you just mean the majority of spp., you'd be correct, simply because most are single-celled. Though debate is possible when we talk about forms of chemical signalling.


Yeah, it's tricky to talk about in the span of a comment. I work on Things Involving Animals - animals provide an excellent counter-current to discussion around AGI, in numerous ways.

One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capacities', I mean it roughly in the sense of Chomsky-era definitions, after the goalposts had been thoroughly moved away from much more inclusive definitions.

Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple of decades the study of animal intelligence has become more inclusive, IMO, recognizing intelligence as capabilities within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...

