Hacker News | namesbc's comments

So the rosy, biased estimate is that OpenAI is saving 1 hour of work per day, i.e. 5 hours per work week and about 20 hours per month.

With a subsidized cost of $200/month for OpenAI, it would be cheaper to hire a part-time minimum-wage worker than to contract with OpenAI.

And that is the rosiest estimate OpenAI has.


The closest I come to working with part-time, minimum-wage workers is working with student employees. Even then, they earn more and usually work more than five hours a week.

Most of the time, I end up putting in more work than I get out of it. Onboarding, reviewing, and mentoring all take significant time.

Even with the best students we had, paying around 400 euros a month, I would not say that I saved five hours a week.

And even when they reach the point of being truly productive, they are usually already finished with their studies. If we then hire them full-time, they cost significantly more.


A part-time minimum-wage worker can't code.

Check the wages of coders outside of the US

There used to be a mythological creature on IRC from South America (sorry, I forget the specifics) who was both a 10x dev and a 10x mathematician. One day he showed a picture of his computer: a low-end laptop with an external TFT monitor and an external keyboard, because the built-in screen and keyboard didn't work. It explained everything; the machine was just good enough to write code, do math, read Stack Exchange, and lurk on IRC with his ghosts.

If you take off the rosy glasses, it is more like 10 hours saved per month at an unsubsidized cost of $1000/month.

That $100/hr is worth it for US programming jobs, but not for much else.
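A quick sketch of the arithmetic behind the two estimates in this thread (the figures are taken from the comments above; `cost_per_hour` is just an illustrative helper, not anything official):

```python
# Back-of-the-envelope cost per hour saved, using the figures from the thread.

def cost_per_hour(monthly_cost_usd: float, hours_saved_per_month: float) -> float:
    """Dollars paid per hour of work saved."""
    return monthly_cost_usd / hours_saved_per_month

# Rosy estimate: ~1 hour/day, ~20 hours/month, at the subsidized $200/month price.
rosy = cost_per_hour(200, 20)        # $10 per hour saved

# Skeptical estimate: ~10 hours/month, at an unsubsidized $1000/month price.
skeptical = cost_per_hour(1000, 10)  # $100 per hour saved

print(rosy, skeptical)
```

Under the skeptical assumptions, the tool only pays off where an hour of labor costs more than $100.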


What people here forget is that coding is a tiny minority of the actual usage, ~5% if I remember correctly?

Their best market might just be as a better Google with ads


Yep, bulk of AI usage is generating marketing emails

Here's OpenAI's data on it: https://www.nber.org/system/files/working_papers/w34255/w342...

I don't think marketing emails are written enough to constitute the "bulk" of it, but writing in general seems to be


Software engineers used to know that measuring lines of code written was a poor metric for productivity...

https://www.folklore.org/Negative_2000_Lines_Of_Code.html


Ctrl-F 'lines', 0 results

Ctrl-F 'code', 0 results

What is this comment about?


"The LLM can write lines of code, sure, but can it be productive?" is, I think, the implied question.

The linked short story is barely 5 paragraphs long. You could have just read it instead of writing an insubstantial remark like this. It’s a fun anecdote about a famous programmer (Bill Atkinson).

I’ve read it multiple times before, it’s irrelevant in this discussion.

Measuring productivity by the number of words written per day is as useless a measure as the number of lines of code written per day.

Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...

Maybe it was edited. I count at least 6 instances of the word “code”

underyx was doing the ctrl+f on the original (horses) article, not the negative 2000 lines of code article.

It's a confusing comment. I misinterpreted it myself too originally.


So, a free idea from me: train the next coding LLM to produce not regular text, but patches that shorten code while keeping it working the same.

They can already do that. A few months ago I played around with the Kaggle Python golf competition and got to the top 50 without writing a line of code myself. Modern LLMs can take a piece of code and "golf" it. And modern harnesses (Claude Code / Codex / Gemini CLI) can take a task and run it in a loop if you give them clear scores (i.e. code length) and test suites outside of their control (i.e. whether the solution is valid or not).

No idea why you'd want this in a normal job, but the capabilities are here.


LLMs won't ever shut up; that seems unfixable. But a "hack" would perhaps be to train them to make longer patches that actually remove code.

gonna tell claude to write all my code in one line

Imagine you are a Perl programmer writing js...

Ugh, this bill is all about preventing regulation of multinational corporations; it has nothing to do with the right of individuals to compute anything.


Yep, I'd like to think this was a bill to protect my right to run any software I want to on hardware that I own, but it's actually a bill to keep Montana localities (cities, counties) from regulating or restricting data centers (noise, power, etc) in the interests of their residents. It's a state preemption bill restricting local democracy for the benefit of big business and polluters.


"Government actions that restrict the ability to privately own or make use of computational resources for lawful purposes, which infringes on citizens' fundamental rights to property and free expression, must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest in public health or safety."

They can absolutely be regulated, but you must prove actual harm instead of "I don't want any data centers near me because of (conspiracy theory I read on Facebook)."


Spending $1500 per month is a crazy wasteful amount of money.


That's $18k a year, about equal to or cheaper than "outsourcing", minus the tax and legal ramifications.

I agree it's wasteful, but only from a long-term view of what spending looks like (or at least what it should, or used to, look like). Those who see $1.5k/month as "saving" money typically only care about next quarter.

As the old adage goes: a thousand dollars saved this month is 100 thousand spent next year.


I'll choose the wheel over using a country's worth of electricity to parrot unusable AI slop to gullible fools.


Is AI not useful to you? I've sped up my SWE work significantly (10x). Not sure why the cynicism.


If you're just talking about SWE work, that's only one segment of the economy, and it's the "virtual world". But humans have to live in the real world.

I believe the true revolution is going to be when AI can start living / interacting with the physical world. Driverless cars might be the start here.


If you are forcing your staff to use shitty tooling or be fired, then I bet you have a high attrition rate and a failing company.


We have a very successful company that has been running for 30 years, with developers across 6 countries. We just make sure we hire developers who know that they're here to do a job, on our terms, for which they will get paid, and it's our way or the highway. If they don't like it, they don't have to stay. However, by doing this we have maintained a standard that our competitors fail at, partly because they spend their time tiptoeing around staff and their comforts and preferences.


And you happened to have created an account on Hacker News just 3 months ago, after 30 years in business, just to hunt AI skeptics?


I don't hunt "AI skeptics". I just provide a viewpoint based on professional experience, not one of "AI is bad at coding because everyone on Twitter says so".


And you happened to have created an account on Hacker News just 3 months ago, after 30 years in business, just to provide a viewpoint based on professional experience?


Yes, you're right I should have made an account 30 years ago, before this website existed, and gotten involved in all the discussions taking place about the use of ChatGPT and LLMs in the software development workplace


Have you ever hired anyone for their expertise, so they tell you how to do things, and not the other way around? Or do you only hire people who aren't experts?

I don't doubt you have a functioning business, but I also wouldn't be surprised if you get overtaken some day.


Most of our engineers are hired because of their experience. They don't really tell us how to do things. We already know how to do it. We just want people who can do it. LLMs will hopefully remove this bottleneck.


Wow, you are really projecting the image of a wonderful person to work for.

I don't doubt you are successful, but the mentality and value hierarchy you seem to express here is something I never want to have anything to do with.


Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?: https://www.theguardian.com/technology/2024/sep/15/data-cent...


That doesn't solve the problem


Indeed, it doesn't solve the problem that people will misinterpret data and spread misinformation to justify their bad feeling about AI with invalid arguments.


It is the lack of regulation that is the problem here. The power company is incentivized to make higher profits year-round by not preparing for a disaster.


PG&E is "heavily regulated", and yet Newsom allowed them to not upgrade power lines, which eventually caused massive wildfires.


The fact that PG&E, in a much more regulated and liberal California, also has power problems is interesting and valid, but Newsom's only been governor since 2019 (the Camp Fire was 2018), so you can't put that much blame on him, and there are decades of blame to go around.


Part of the issue is that it's listed on the stock market.


All it takes is a simple, tiny change to the user agreement, and then OpenAI will start training on this user data; most people won't notice in time to switch their toggle off.

