
Often someone’s personal productivity with AI means someone else has to dig through their piles of rubbish to review the PRs they submitted.

In your particular case it sounds like you’re rapidly losing your developer skills, and enjoying that you now get to put in less effort and think less.



We know that relying heavily on Google Maps makes you less able to navigate without Google Maps. I don't think there's research on this yet, but I would be stunned if the same process isn't at play here.


Whatever your mind believes it doesn’t need, and is expensive to maintain and run, it’ll let go of. This isn’t entirely accurate from a neuroscience perspective but it’s kinda ballpark.

Pretty much like muscles decay when we stop using them.


Sure, but sticking with that analogy, bicycles haven’t caused the muscles of people who used to go for walks and runs to atrophy either – they now just go much longer distances in the same time, with less joint damage and more change in scenery :)


>> Whatever your mind believes it doesn’t need, and is expensive to maintain and run, it’ll let go of. This isn’t entirely accurate from a neuroscience perspective but it’s kinda ballpark.

>> Pretty much like muscles decay when we stop using them.

> Sure, but sticking with that analogy, bicycles haven’t caused the muscles of people that used to go for walks and runs to atrophy either ...

This is an invalid continuation of the analogy, as bicycling involves the same muscles used for walking. A better analogy to describe the effect of no longer using learned skills could be:

  Asking Amazon's Alexa to play videos of people
  bicycling the Tour de France[0] and then walking
  from the couch to your car every workday
  does not equate to being able to participate in
  the Tour de France, even if years ago you
  once did.
0 - https://www.letour.fr/en/


Thanks for putting the citation for the Tour de France. I wouldn't have believed you otherwise.


> Thanks for putting the citation for the Tour de France. I wouldn't have believed you otherwise.

Then the citation served its purpose.

You're welcome.


Oh, but they do atrophy, and in devious ways. Though the muscles under linear load may stay healthy, the ability of the body to handle the knee, ankle, and hip joints under dynamic and twisting motion does atrophy. Worse yet, one may think that they are healthy and strong, due to years of biking, and unintentionally injure themselves when doing more dynamic sports.

Take my personal experience for whatever it is worth, but my knees do not lie.


Sure, only cycling sounds bad, as does only jogging. And thousands of people hike the AT or the Way of St. James every year, despite the existence of bicycles and even cars. You've got to mix it up!

I believe the same holds true for cognitive tasks. If you enjoy going through weird build file errors, or it feels like it helps you understand the build system better, by all means, go ahead!

I just don't like the idea of somehow branding it as a moral failing to outsource these things to an LLM.


Yeah, but what's going to happen with LLMs is that the majority will just outsource thinking to the LLM. If something has a high visible reward with hidden, dangerous risks, people will just go for the reward.


Ok Socrates, let’s go back to memorizing epic poems.


To extend the analogy further, people who replace all their walking and other impact exercises with cycling tend to end up with low bone density and then have a much higher risk of broken legs when they get older.


Well, you still walk in most indoor places, even if you are on the bike as much as humanly possible.

But if you were literally chained to a bike and could not move in any other way, then surely you would "forget"/atrophy in specific ways, such that you wouldn't be able to walk without relearning/practicing.


> Whatever your mind believes it doesn’t need to hold on to that what is expensive to maintain and run, it’ll let go of. This isn’t entirely accurate from a neuroscience perspective but it’s kinda ballpark.

A similar phenomenon occurs when people see or hear information, depending on whether they record it in writing or not. The act of writing down the percepts, in and of itself, assists in short-term to long-term memory transference.


I know that I am better at navigating with google maps than average people, because I navigated for years without it (partly on purpose). I know when not to trust it. I know when to ignore recommendations on recalculated routes.

Same with LLMs. I am better with it, because I know how to solve things without the help of it. I understand the problem space and the limitations. Also I understand how hype works and why they think they need it (investors money).

In other words, no, just using google maps or ChatGPT does not make me dumb. Only using it and blindly trusting it would.


Yeah this definitely matches my experience and guess what? Google maps sucks for public transit and isn't actually that good for pedestrian directions (often pointing people to "technically" accessible paths like sketchy sidewalks on busy arterial roads signed for 35mph where people go 50mph). I stopped using Google maps instinctually and now only use it for public transit or drives outside of my city. Doing so has made me a more attentive driver, less lazy, less stressed when unexpected issues on the road occur, restored my navigation skills, and made me a little less of, frankly, an adult man child.

Applying all of this to LLMs has felt similar.


It gets worse for projects outsourced to one or more consultancy firms, where staff costs are prohibitively high; now you've got another layer of complexity to factor in (risks, costs).

Consultancy A submits work, Consultancy B reviews/tests it. As A increases its use of AI, B will have to match it with more staff or more AI. More staff for B means higher costs at a slower pace. More AI for B means a higher burden of proof; an A vs. B race condition is likely.

Ultimately, clients will suffer from AI fatigue and inadvertently incur more costs at a later stage (post-delivery).


My own code quality is better with AI, because it makes it feasible to indulge my perfectionism to a much greater degree. Before AI, I usually needed to stop sooner than I would have liked to and call it good enough. Now I can justify making everything much more robust because it doesn’t take a lot longer.

It’s the same story with UI/UX. Previously, I’d often have to skip little UI niceties because they take time and aren’t that important. Now even relatively minor user flows can be very well polished because there isn’t much cost to doing so.


https://github.com/plandex-ai/plandex/blob/9017ba33a627c518a...

Well, your perfectionism needs to be pointed towards this line. If you get truly large numbers of users, this will either slow down token checking directly or make your process for removing ancient expired tokens (I'm assuming there is such a process...) much slower and more problematic.
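To make the concern concrete, here's a minimal sketch of the kind of fix being implied: put an index on the token table's expiry column so that periodic cleanup of expired tokens is a range scan rather than a full-table scan. The table and column names here are hypothetical illustrations, not the linked project's actual schema, and SQLite stands in for whatever database it really uses.

```python
import sqlite3
import time

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE auth_tokens (token TEXT PRIMARY KEY, expires_at INTEGER)"
)
# Without this index, the DELETE below must scan every row; with it,
# the database can walk just the expired range of the index.
conn.execute("CREATE INDEX idx_tokens_expiry ON auth_tokens (expires_at)")

now = int(time.time())
# Seed 100 tokens: a third expired an hour ago, a third expiring now,
# a third still valid for another hour.
conn.executemany(
    "INSERT INTO auth_tokens VALUES (?, ?)",
    [(f"tok{i}", now + (i % 3 - 1) * 3600) for i in range(100)],
)

# Periodic cleanup job: drop everything already expired.
deleted = conn.execute(
    "DELETE FROM auth_tokens WHERE expires_at < ?", (now,)
).rowcount
print(deleted)  # prints 34: the tokens that expired an hour ago
```

The same idea applies to the validity check itself: looking a token up by its (indexed) primary key stays fast regardless of how many stale rows have accumulated, but the cleanup path is what quietly degrades without an expiry index.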


Lol is that really the best example you could find?


Truly the response of someone who is a perfectionist using LLMs the right way and not a slop coder.


It's just funny because there are definitely examples of bad code in that repo (as there are in any real project), but you picked something totally routine. And your critique is wrong fwiw—it would easily scale to millions of users. Perhaps you could find something better if you used AI to help you...


I’d love not to have to be great at programming, as much as I enjoy not being great at cleaning the sewers. But I get what you mean, we do lose some potentially valuable skills if we outsource them too often for too long.


It’s probably roughly as problematic as most people not being able to fix even simple problems with their cars themselves these days (i.e., not very).


Does everyone need AI to make some minor modification to an Excel file?


Of course not. Who is arguing for that?


Give it time. They will, eventually.


This is so baseless and insulting and makes so many assumptions I don’t think you deserve a response from me at all.


> In your particular case it sounds like you’re rapidly losing your developer skills, and enjoying that you now get to put in less effort and think less.

Just the other day I was complaining that no one knows how to use a slide rule anymore...

Also, C++ compilers are producing machine code that's hot garbage. It's like no one understands assembly anymore...

Even simple tools are often misused (like hammering a screw). Sometimes they are extremely useful in right hands though. I think we'll discover that the actual writing of code isn't as meaningful as thinking about code.


Hahaha well said, thank you. I feel like I’m taking crazy pills reading some of the comments around here. Seriously, some real "old man shakes fist at cloud" moments.


I'm losing my developer skills like I lost my writing skills when I got a keyboard. Yes, I can no longer write with a pen, but that doesn't mean I can't write.


Also, I don’t know about you, but despite the fact that I basically never write with a pen, the occasional time I have to, I’m a little slow, sure, but it’s not like I physically can’t do it. It’s no big deal.

Imagine telling someone with a typewriter that they’d be unable to write if they don’t write by hand all the time lol. I write by hand maybe a few times a year - usually writing a birthday card or something - but I haven’t forgotten.


Yep, same. I might have forgotten some function names off the top of my head, but I still know how to program, and I do every day.


Exactly


Another way of viewing it would be that LLMs allow software developers to focus their development skills where they actually matter (correctness, architecture, etc.), rather than wasting hours catering to the configuration idiosyncrasies of the framework or library of the day.

That stuff kills my motivation to solve actual problems like nothing else. Being able to send off an agent to e.g. fix some build script bug so that I can get to the actual problem is amazing even with only a 50% success rate.


The path forward here is to have better frameworks and libraries, not to rely on a random token generator.


Sure, will you write them for me?

Otherwise, I’ll continue using what works for me now.


>better frameworks and libraries

I feel like the past few decades of framework churn has shown that we're really never going to agree on what this means


You still have to review and understand changes that your “AI agent” did. If you don’t review and fully understand everything it does, then I fear for your project.



