
Someone on HN wrote what is (IMO) the main reason people do not accept AI.

  AI is about centralisation of power
So basically, only a few companies that hold the large models will have all the knowledge required to do things, and they will rent you your own computer while collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to the cloud to train their models on your work).

For me AI is just a natural language query model for text. If I need to find something in text, join it with other knowledge, etc. (things I'd do in SQL if there were an SQL that processed natural language), I do it with an LLM. This enhances my work. However, other people seem to feel threatened. I know a person who dropped out of a CS course because AI was solving the algorithmic exercises better than he could. This might cause widespread depression, as we are no longer at the "top". He went into medicine instead, where people will basically be using AI to diagnose patients and AI operators will still be required (i.e. there is no threat of job cuts due to AI in the public health service).
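To make that concrete, here is a rough sketch of the kind of "query" I mean (Python with the openai client; the model name, the notes folder and the question are just placeholders, and a local OpenAI-compatible server works the same way):

  # Rough sketch of "SQL over text": the folder, question and model name are placeholders.
  from pathlib import Path
  from openai import OpenAI

  client = OpenAI()  # a local OpenAI-compatible server works the same way

  docs = "\n\n".join(p.read_text() for p in Path("notes").glob("*.txt"))

  question = (
      "From the notes below, list every customer together with their open issues, "
      "joined with the deadlines mentioned in the roadmap note."
  )

  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; any chat model will do
      messages=[
          {"role": "system", "content": "Answer only from the provided notes."},
          {"role": "user", "content": f"{question}\n\n---\n\n{docs}"},
      ],
  )
  print(resp.choices[0].message.content)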

So the world is changing, power is being gathered, and there is no longer the possibility of "running your own local cloud with OpenOffice and a mail server" to take that power back from the giants.





> AI is about centralisation of power

I do not believe this is the main reason at all.

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans WANT to do. Things that give them meaning, and many of these are tied to earning money and producing value by doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

Much of the meaning we humans derive from work is tied to the value it provides to society. One can code for fun, but doing the same coding where it provides value to others/society is far more meaningful.

Presently some may say: AI is amazing, I am much more productive, AI is just a tool, AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so those it empowers should stop and think a little about the long-term implications. It may be that right now you are in a comfortable position financially or socially, but your future self, in just a few short months, may be dramatically impacted.

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get onto the job ladder; the legal secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.


It sounds like you agreed by the end, just with a slightly different way of getting there.

> AI is about centralisation of power

> So basically, only a few companies that hold the large models will have all the knowledge required to do things,

There are open source models, and these will continue to keep abreast of new features. On-device-only models are likely to be available too. Both will be good enough, especially for consumer use cases. Importantly, it is not corporations alone that have access to AI. I foresee whole countries releasing their own versions in an open source fashion, and much more. After all, you can't stop people applying linear algebra ;-)

There doesn't appear to be a moat for these organisations. HN users mention hopping from model to model like rabbits. The core mechanic is interchangeable.

There is a 'barrier to entry' of sorts that does exert some pressure toward centralisation, particularly at scale, and it conveniently aligns with large corporations: GPUs are expensive and AI requires a lot of processing power. But it isn't the core issue.


> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes

You're absolutely right, this is another face of the AI coin... We are taught to do things and to love doing them, and we're scared that's going to be taken away from us. This is what I was thinking when writing about the man who dropped out of the CS course. He apparently predicted that learning to solve algorithmic problems wouldn't make him happy, because AI would do it for him.


Hardware is a different problem from software. Once the chores robot gets here, then what? Millions of domestic workers are now out of a job. Then how will you feel about AI?

I don't think that's the reason either.

AI is just not that good. If it really made me more productive, why wouldn't I use it all the time? I'd get everything done before lunch and go home. Or I'd use it all day to do the work of 3 people and be on the fast track to promotions.

The problem is simply that it gets in the way. For things I know nothing about, AI is excellent. For things that I'm good at and have literally been doing for a decade+, I can just do it better and faster myself, and I'm tired of people who know nothing about my profession gaslighting me into thinking that LLMs do the same thing. And I'm really tired of people saying "oh AI is not good today, but it'll be good tomorrow so just start using them" -- fine, wake me up when it's good because I've been waiting and patiently testing every new SOTA model since 2023.

Just get the facts right, that's all I ask of tech execs. Why has AI become a religion?


> Or I'd use it all day to do the work of 3 people and be on the fast track to promotions.

Or, like the execs want, you do the work of 3 people so they can fire two and collect the bonus, plus maybe a 5% pay raise for you. "If someone is good at digging, give him a bigger shovel."


My impression is that, purely from a metrics perspective, people who were underperforming can really look like they are 3x as productive. AI is a real increase for them because it can do things they couldn't. It gives them the ability to publish a thousand lines' worth of PRs a day, which the regular and over-performers then have to review; that burden shows up in THEIR metrics, not in the underperformer's. If all you look at is metrics and KPIs, and you have no technical understanding, this looks amazing.

Most people I've worked with that were already some of the most productive before AI took off are still at the top, and AI didn't move the needle much for them. There's simply no way for them to do 3x the work.


But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
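To back up the "fine-tuned for personal use" part, here is a minimal sketch of how low the barrier is these days, using Hugging Face transformers plus peft; the model id is a placeholder and the target modules depend on the architecture you pick:

  # Minimal LoRA sketch: add small trainable adapters to an open-weights model
  # instead of retraining the whole thing. The model id is a placeholder and the
  # target modules are typical for Llama/Mistral-style attention blocks.
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  model_id = "some-org/some-open-3b-model"  # placeholder
  model = AutoModelForCausalLM.from_pretrained(model_id)

  lora = LoraConfig(
      r=8,
      lora_alpha=16,
      target_modules=["q_proj", "v_proj"],
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora)
  model.print_trainable_parameters()  # usually well under 1% of the weights
  # ...then train just the adapters on your own data with the usual Trainer loop.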

As an average consumer, I actually feel less locked into Gemini/ChatGPT/Claude than I am into Apple or Google for other tech (e.g. photos).


> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

It was already tough to run flagship-class local models and it's only getting worse with the demand for datacenter-scale compute from those specific big players. What happens when the model that works best needs 1TB of HBM and specialized TPUs?

AI computation looks a lot like early Bitcoin: first the CPU, then the GPUs, then the ASICs, then the ASICs mostly being made specifically by syndicates for syndicates. We are speedrunning the same centralization.


It appears to me the early exponential gains from new models have plateaued. Current gains seem very marginal; it could be that the future best model that needs "1TB of HBM and specialized TPUs" won't be all that much better than the models we have today. All we need to do is wait for commodity hardware that can run current models, and OpenAI / Anthropic et al are done if their whole plan to monetize this is to inject ads into the responses. That is, unless they can actually create AGI that requires infrastructure they control, or make some other advancement.

That's what I was thinking as I was listening to the "be like clippy" video linked in the parent. Local models probably won't match the quality of the big guys' for a long time to come, but for now the local, open models have a lot of potential for us to escape this power consolidation before it's complete, while still giving their users 75-80% of the functionality. That remaining 20-25%, combined with the new skill of managing an LLM, is where the self-value comes in, the bit that says, "I do own what I built or learned or drew."

The hardest part with that IMO will be democratizing the hardware so that everybody can afford it.


Hopes that we will all be running LLMs locally, in the face of skyrocketing prices for all kinds of memory, sound very similar to the cryptoanarchists' ravings about full copies of the blockchain stored locally on every user's device in the face of the exponential growth of its size.

The only difference is that memory prices skyrocketing is a temporary thing resulting from a spike in demand from incompetent AI megalomaniacs like Sam Altman who don't know how to run a company and are desperate to scale because that's the only kind of sustainability they understand.

Once the market either absorbs that demand (if it's real) or else over-produces for it, RAM prices are going to either slowly come back down (if it's real) or plunge (if it isn't).

People are already running tiny models on their phones, and there's a Mistral 3B model that runs locally in a browser (https://huggingface.co/spaces/mistralai/Ministral_3B_WebGPU).
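For anyone who hasn't tried it, running a small model on your own machine is only a few lines now. A rough sketch with the transformers pipeline; the model id is a placeholder, so substitute whichever small instruct model you actually want:

  # Sketch: run a small open model entirely on your own machine.
  # The model id is a placeholder; most 1-3B instruct models work the same way.
  from transformers import pipeline

  generate = pipeline(
      "text-generation",
      model="some-org/some-3b-instruct",  # placeholder id, look up the exact name
      device_map="auto",                  # plain CPU works too, just slower
  )

  out = generate(
      "Summarise in one sentence why local models matter for privacy.",
      max_new_tokens=64,
  )
  print(out[0]["generated_text"])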

So we'll see what happens. People used to think crypto currencies were going to herald a new era of democratizing economic (and other) activity before the tech bros turned Bitcoin into a pyramid scheme. It might be too late for them to do the same with locally-run LLMs but the NVidias and AMDs of the world will be there to take our $.


There is a case that the indices owned by the major search engines are a form of centralization of power. Normal people and smaller companies would have to pay a lot of money to get indices for their new competing search engine. However the analogy falls apart when you look at a) the scale of the investments involved and b) the pervasiveness of the technology.

Creating a search engine index requires several orders of magnitude less computing power than creating the weights of an LLM. It is theoretically possible for somebody with a lot of money to spare to create a new search index, but only the richest of the rich can do that with an LLM.

And search engines are there to fulfill exactly one technical niche, albeit an important one. LLMs are stuffed into everything, whether you like it or not. Like if you want to use Zoom, you are not told to “enrich your experience with web search”, you are told, “here is an AI summary of your conversation”.


Exactly. I was paying for Gemini Pro and moved to a Claude subscription. I'm going to switch back to Gemini for the next few months. The cloud centralization, in its current product stage, allows you to be a model butterfly. And these affordable and capable frontier-model subscriptions help me train and modify my local open-weight models.

Economies of scale make this a space that is really difficult to be competitive in as a small player.

If it's ever to be economically viable to run a model like this, you basically need to run it non-stop, and make money doing so non-stop in order to offset the hardware costs.
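Back-of-the-envelope, with entirely made-up numbers just to show the shape of that break-even problem:

  # Toy break-even sketch for self-hosting a big model; every number is an assumption.
  hardware_cost = 40_000        # USD for a multi-GPU box (assumed)
  lifetime_years = 3            # before the hardware is obsolete (assumed)
  power_kw = 2.0                # draw under load (assumed)
  electricity = 0.30            # USD per kWh (assumed)
  hosted_per_hour = 6.0         # cost of equivalent hosted capacity (assumed)

  hours = lifetime_years * 365 * 24
  own_per_hour = hardware_cost / hours + power_kw * electricity
  print(f"own: ~${own_per_hour:.2f}/h vs hosted: ${hosted_per_hour:.2f}/h")

  # The catch: that only beats the hosted price if the box is actually busy.
  # At 10% utilisation the amortised hardware cost per useful hour is 10x higher.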


No, the main reason people don't accept AI is that it isn't very good[1] at the things they want to accomplish.

Everyone I know accepts AI for the things it is good at, and rejects it for things it sucks at. The dividing line varies by task and the skill of the operator (both "how good at persuading the AI" and "how easy would it be to just do the job by hand").

In some companies the problem is management layers with thin understanding trying to force AI into the organization because they read some article in CIO Magazine. In other companies (like Microsoft) I suspect the problem is that they're forcing the org to eat their own dogfood and the dogfood kinda sucks.

[1] Yet.


> This enhances my work. However other people seem to feel threatened.

I wish people would stop spreading this as if it were the main reason. It's a weak argument and disconnected from reality, like those people who think the only ones who dislike cryptocurrencies are the ones who didn't get rich from them.

There are plenty of reasons to be against the current crop of AI that have nothing to do with employment. The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking, the decrease in quality of life (e.g. people who live next to noisy data centres)… Not everything is about jobs and money, the world is bigger than that.


Don't forget the fact that most of the time, the AI tools don't actually work well enough to be worth the trouble.

AI meeting notes are great! After you spend twice as long editing out the errors, figuring out which of the two Daves was talking each time, and removing all the unimportant side-items that were captured in the same level of detail as the core decision.

AI summaries are great - if you're the sort of person that would use a calculator that's wrong 10% of the time. The rest of us realize that an hour spent reading something is more rewarding and useful than an hour spent double checking an AI summary for accuracy.

AI as asbestos isn't even an apt comparison. Both are toxic and insidious, but asbestos at least had a clear and compelling use case at the time: it solved some problems both better and cheaper than the available alternatives. AI solves problems poorly and at higher cost, and people call you "threatened" if you point that out.


Asbestos was a great product! We used it absolutely everywhere for a reason: fireproof, a great insulator, and you can literally make fabric out of it. If it wasn't for the whole lung cancer thing, it would've still been as common as plastic.

AI, on the other hand? Seems like we're mostly getting cancer.


AI summaries are great for orgs that don't do them at all (AI is better than nothing), but not that great for orgs that have an agenda and a note taker, which is very rare but better quality.

No, having no summary can be more valuable than having a wrong summary.

The problem is that the wrong summary will be treated as the truth, as the original recording will of course have been deleted after a grace period. Oh, you're looking into a way to clean up hanging child processes spawned by your CI worker? Guess it's now on the record that "rvba mentions looking into the best way to kill his children without leaving a trace"! There's no way that could possibly be misinterpreted a few years down the line, right?


No way. Anything written down becomes the source of truth 3+ months later. Either write it down correctly or don’t write it down at all.

Ngl, I feel like most people only accept these criticisms of AI because they're against AI to begin with. If you look at the claims, they fall apart pretty quickly. The environmental issue is negligible and has less to do with AI than with computing in general. The consolidation of resources assumes that larger, more expensive AI models will outcompete smaller local models, and that's not necessarily happening. The spread of misinformation doesn't seem to have accelerated at all since AI came about (probably because we were already at peak misinformation and AI can't add much more). The decay in critical thinking is far overblown, if not based on outright manipulated data.

About the only real problem here is the increase in surveillance, and you can avoid that by running your own models, which are getting better and better by the day. The fact that people are so willing to accept these criticisms without much scrutiny is really just indicative of prior bias.


Would you say there's at least fifty percent of people who are informed about these?

It’s not clear to me what exactly you’re asking, but I’ll try to answer it anyway. I’d say that of the people who are against AI, fewer than 50% (to use your number) aren’t against it solely (or primarily, or at all) because they feel threatened for their job. Does that answer your question?

I meant more about this part:

> The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking

My question was: how many people are actually concerned about those things? If you think about it, it's kind of obvious, but it takes conscious effort to see it, and I suspect not many people do.


anecdotal but it seems many people are concerned about those things

I know, right? And very often I have the notion that these things are obvious concerns for all sensible people. Except that lately I've been noticing how, for some reason, I only get that notion as long as I'm regularly consuming one information-containing product or another.

When I turn off my browser, video player, and ebook reader, outside it's a bit of a hellscape really; I can't wait to get back online where people care about the real things, such as systemic collapse. But while I'm disconnected I do notice that the only things people seem to actually be enjoying right now are those selfsame glass beads and plague blankets of Big Tech that we're dissing while trapped within them.


Yes.

Source: My ass.

Would it make their concerns less valid, though, if it weren't?


You might have read that wrong.

Of course not. I think more people should be aware of this, especially when talking about the majority who are outside of our own bubble. If you go to a random place and interact with a random person, you will likely encounter the dominant group, and that, I think, directly correlates with what is going to happen next.


You can still run your local cloud even while AI providers are heavily consolidated into a few.

While for programming tasks I currently use Claude, local models can be tuned to deliver 80% of the time savings you get from using AI. It depends a bit on the work you do. This will probably improve, while frontier models seem to be hitting hard ceilings.

Where I would disagree is that joining concepts or knowledge works at all with current AI. It works rather badly in my opinion. Even the logical and mathematical improvements of the latest Gemini model don't impress much yet.


Local models are fine for the way we have been using AI, as a chatbot or a fancy autocomplete. But everyone is cramming AI into everything. Windows will be an agentic OS whether we like it or not, and there will be no using your own local model for that use case. It looks like everything is moving that way.

Hmmm, maybe use a different OS? I would never dream of using Windows to get any type of work done myself, and there are many others like me. There certainly are choices. If you prefer to stay, MCP services can be configured to use local models, and people are doing so on Windows as well (and definitely on macOS and Linux). From an OS instrumentation perspective, I think macOS is probably the most mature: Apple has acknowledged MCP and intends a hybrid approach defaulting to its own in-house, on-device models, but by embracing MCP it appears to be allowing local model access.
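As one concrete (and not MCP-specific) illustration of the "use local models" point: most local runtimes, Ollama or llama.cpp's server for example, expose an OpenAI-compatible endpoint, so existing tooling can usually be pointed at them just by swapping the base URL. Roughly, with the port and model name depending on your setup:

  # Sketch: reuse OpenAI-style tooling against a locally running model.
  # Base URL and model name depend on your local server; Ollama is shown as an example.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
      api_key="ignored-locally",             # the client requires one, the server ignores it
  )

  resp = client.chat.completions.create(
      model="llama3.2",  # whatever model you've pulled locally
      messages=[{"role": "user", "content": "Summarise this meeting in three bullets: ..."}],
  )
  print(resp.choices[0].message.content)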

I think it is incredibly healthy to be critical, and perhaps even a tinge cynical, about the intentions of companies developing and productizing large language models (AI). However, the argument here completely ignores the evolving ecosystem of open-weight models. Yes, the prominent companies developing frontier models are attempting to build markets and moats where possible, and the capital investments in cloud are incredibly centralized. But even in 2025 the choice is there, with your own capital investment (RTX, MacBook, etc.), for completely private and decentralized AI. You can also choose your own cloud: Cloudflare just acquired Replicate. If enough people continue to participate in the open-weight ecosystem, this centralization need not be totalitarian.

Karpathy recently did an interview where he says that the future of AI is 1B models, and I honestly believe him. Small models are getting better and better, and this is going to end up decentralizing power more so than anything else.

But the opposite is actually true. You can use AI to bypass a lot of SaaS solutions.

So you are saying that you can now bypass a lot of solutions offered by a mix of small/large providers by using a single solution from a huge provider, and this is the opposite of centralization of power?

>"by using a single solution from a huge provider"

The parent didn't say that though and clearly didn't mean it.

Smaller SaaS providers have a problem right now. They can't keep up with the big players in terms of features, integrations and aggressive sales tactics. That's why concentration and centralisation are growing.

If a lot of specialised features can be replaced by general purpose AI tools, that could weaken the stranglehold that the biggest SaaS players have, especially if those open-weights models can be deployed by a large number of smaller service providers or even self-hosted or operated locally.

That's the hypothesis I think. I'm not sure it will turn out that way though.

I'm not sure whether the current hyper-competitive situation where we have a lot of good enough open weights models from different sources will continue.

I'm not sure that AI models alone will ever be reliable enough to replace deterministic features.

I'm not sure whether AI doesn't create so many tricky security issues that once again only the biggest players can be trusted to manage them or provide sufficient legal liability protection.


With AI-specialized hardware you can run the open source models locally too, without the huge provider stealing your precious IP.

Ah, so what you are saying is this: now you can buy your own specialized hardware, which is realistically produced and sold by a single company on earth, compete with ~3 of the largest multinational corporations to do so (consider the RAM prices lately, to get a sense of the effect of this competition), spend tens of thousands in the process, and run your 'own' model, which someone spent millions to train and made open for some reason (this is not a point about its existence; it's about its reliability. I don't think it's wise to assume the open models will stay roughly in line with SOTA forever). This way, by spending roughly 1-2 orders of magnitude more, you can eliminate a handful of the SaaS products you use.

Sorry, I don't see this happening, at least not for the majority. Even if it does, it would still be arguably centralizing.


Sorry, but your SQL comparison is way off. SQL is deterministic, has a defined standard that databases must follow, and when you run a statement it can show you a query plan.

This is the absolute opposite of using an LLM. Please stop using this comparison and perhaps look for others, like, for example, a randomised search engine.


Hear me out though: what if every time you ran an SQL query it made a bunch of stuff up? 80% of the time it's what you want, but sometimes it just imagines data and pulls it out of its butt.

You’re absolutely right!

You're missing the point entirely. He's saying it's horses for courses, each tool has its use and you use the right tool for the job.

And he's right. LLMs are fancy text query engines and work very well as such.

The problem is when people try to shoehorn everything into LLMs. That's a disaster, yet it's being pursued vigorously by some.


I think the conflicting point you missed is that "right tool for the job" also implies "right tool". If you don't think that probabilistic output counts as "query response" then LLMs are the "wrong tool" for any text query engine. If a database engine returned the right answer only X% of the time, you would say the database engine is faulty and find another. LLMs are probabilistic algorithms that by their very nature cannot hit 100% accuracy. (Especially as you get into the specifics of things like the lossy "compression" mechanics of tokenization and vectorization. The training set isn't 100% represented without some loss, either.) That doesn't seem like a good fit for a "query engine" tool in a database sense to some of us.

In practice they seem to work well for that at a surface level, most of the time. The complaint is not that LLMs are not a tool for the job of "fancy text query engine", the complaint is that at scale and in the long run, LLMs are not a good tool for that.


For lots of "text querying" jobs they do a good enough job to be on par with humans (who are not infallible either).

And there are applications where you don't have, or wouldn't pay, another human, and the job that an AI does for mere cents is good enough most of the time. Like doing an analysis of a legacy codebase. I'll read and verify, but running that "query" still saved me a lot of time.

Not everything needs to be deterministic to be of value.


I agree, they can be "practical tools for the job"; that's where I ended my comment. The disagreement seems to be whether "practical tool for the job" is the same as "right tool for the job". A hammer can be a practical tool for the job of driving a screw into a wall (once, at least), but few would call it the right tool for that job. An LLM can be a practical tool for a text query (at least as a first pass, at least with review and a grain of salt), but if you need reliability or repeatability or the ability to send results directly to a customer without a human in the loop, it may not be the right tool for the job.

There's obviously a value in practical tools, deterministic or not. It's just worth making the distinction that a practical tool is not always fit for purpose as the "right" tool if you really are seeking the (most) right tool for the job.



