
"It is 1958. IBM passes up the chance to buy a young, fledgling company that has invented a new technology called xerography. Two years later, Xerox is born, and IBM has been kicking themselves ever since. It is ten years later, the late '60s. Digital Equipment DEC and others invent the minicomputer. IBM dismisses the minicomputer as too small to do serious computing and, therefore, unimportant to their business. DEC grows to become a multi-hundred-million dollar corporation before IBM finally enters the minicomputer market. It is now ten years later, the late '70s. In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business." - Steve Jobs [1][2][3]

Now, "IBM CEO says there is 'no way' spending on AI data centers will pay off". IBM has not exactly had a stellar record at identifying the future.

[1] https://speakola.com/ideas/steve-jobs-1984-ad-launch-1983

[2] https://archive.org/details/1983-10-22-steve-jobs-keynote

[3] https://theinventors.org/library/inventors/blxerox.htm





> IBM has not exactly had a stellar record at identifying the future.

IBM invented/developed/introduced magnetic stripe cards, UPC barcodes, the modern ATM, hard drives, floppies, DRAM, SQL, the 360 family of mainframes, the PC, Apollo guidance computers, Deep Blue. IBM created a fair share of the future we're living in.

I'm no fan of much of what IBM is doing at the moment, but it could be argued that its consultancy/service orientation gives it a good view of how business is using, and planning to use, AI.


They also either fairly accurately predicted the death of HDDs by selling off their research division before the market collapsed, or they caused the end of the HDD era by selling off their research division. They did a lot of research.

I think the retail market is maybe dead but datacenters are still a fairly large customer I’d think. HDDs really shine at scale where they can be fronted by flash and DRAM cache layers.

They are still cheaper than flash for cold data, but that's not going to hold for long. Flash is so much denser that the acquisition cost difference for a multi-petabyte store becomes small next to the datacenter space and power needed by HDDs. HDDs require dedicated research to increase density, while flash can ride silicon manufacturing advances - not that it doesn't require specific research, but being able to apply the IP across a vast space makes better economic sense.
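The claim about the acquisition gap shrinking can be sketched with a toy total-cost model. Every number below is an assumed round figure for illustration, not vendor pricing; only the shape of the comparison matters.

```python
# Back-of-envelope TCO model for a multi-petabyte cold store.
# Every number here is an ASSUMED round figure for illustration,
# not vendor pricing; only the shape of the comparison matters.

def tco_per_tb(acq_usd_per_tb, watts_per_tb, years, usd_per_watt_year):
    """Acquisition cost plus power/space cost per TB over the deployment life."""
    return acq_usd_per_tb + watts_per_tb * usd_per_watt_year * years

# Assumed: HDD ~$15/TB drawing ~0.5 W/TB; QLC flash ~$40/TB drawing
# ~0.05 W/TB; $3 per watt-year covering power plus amortized rack and
# cooling space; a 6-year deployment.
hdd = tco_per_tb(15, 0.5, 6, 3.0)      # 15 + 9.0 = 24.0 $/TB
flash = tco_per_tb(40, 0.05, 6, 3.0)   # 40 + 0.9 = 40.9 $/TB

print(f"HDD lifetime cost:   ${hdd:.2f}/TB")
print(f"Flash lifetime cost: ${flash:.2f}/TB")
print(f"Gap: acquisition ${40 - 15}/TB -> lifetime ${flash - hdd:.2f}/TB")
```

Under these assumed figures the $25/TB acquisition gap narrows once power and space are counted; with different (equally plausible) inputs the crossover point moves, which is exactly the argument being made.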

The hdd being dead will surely come as a surprise to the couple of 12TB rusties spinning joyously in my case right now.

One family down the street from me still drives a Saturn! Yet no one is going to say "Saturn isn't dead!".

Nope. It is more like suddenly every truck everywhere being a Saturn: the unit count may be lower, but the total tonnage moved grows. HDD shipments have fallen fourfold counted in units, but check total exabytes delivered and you will see they are still shipping a lot.

HDDs would be much more important today if flash storage didn’t exist.

Did you know that SSDs do not retain data reliably if they don't get electricity every now and again...

Spinning platters are prone to catastrophic mechanical failure and tape can undergo delamination. What of it?

Do you know how much data is stored on ssds in data centres?

The other way to look at it is that the entire consulting industry is teetering on catastrophe. And IBM, being largely a consulting company now, is not being spared.

IBM isn't failing, though. They're a profitable company with healthy margins, and enterprises continue to hire them for all sorts of things, in large numbers.

So now it makes more sense why they think the AI spending will fail, or needs to.

Because if it didn't, that would be a direct replacement for them.


AI replaces nothing. A consultant or developer is not replaced by AI; they become more powerful with it. An IBM consultant with AI is still way ahead of Johnny Startup with AI.

> The other way to look at it is that the entire consulting industry is teetering on catastrophe

Oh? Where'd you get that information?

If you mean because of AI, it doesn't seem to apply much to IBM. Like most such companies, they are probably not great at what they do, but they are respectable and can take the blame if something goes wrong. AI doesn't have these properties.


If anything there’s likely plenty of work for body shops like IBM in reviewing and correcting AI-generated work product that has been thrown into production recently.

This is a separate argument though. A failing company may still be right in identifying other companies' failure modes.

You can be prescient about failure in one area and still fail yourself. There's no gotcha.


IBM is not a failing company though, they are a Goliath in the Enterprise space.

Still beside the point. The company failing or not is orthogonal to its ability to identify failure in others.

> A failing company may still be right in identifying other companies' failure modes.

Agreed, if this is what they are doing. But what if they're spewing claims to try to discredit an industry in order to quell their shareholders' concerns?


They are not the only ones looking at the money spent on AI datacentres and concluding most of the investment will not be recovered anytime soon.

A lot of the silicon being deployed is great for training but inefficient for inference, and the training-to-inference usage ratio shows a clear shift toward inference. Furthermore, that silicon, with the workloads it runs, doesn't last long and needs replacement.

The first ones to go online might recover the investment, but the followers had better have a plan to pivot to other uses.


IBM was making "calculating cheese cutters" back in the day [0].

I'm sure they can pivot to something else if the need arises.

[0]: https://imgur.com/a/ibm-cheese-cutter-Rjs2I


The whole point of a consultant is to let the execs blame someone else.

Nobody got fired for buying something Gartner recommended, or for following EY's advice to lay off/hire

I don't see AI taking that blame away.


They own Red Hat Linux, Ansible, OpenShift, and Terraform.

If you are doing anything in the Enterprise space, they probably have their claws in you be it on-prem or cloud.

And their work on quantum...

https://www.forbes.com/sites/baldwin/2025/11/25/inside-ibms-...

Not to mention they are still doing quite a bit of Mainframe...


IBM is/was good at inventing a lot of tech.

It may not be good at recognizing good tech invented by others, or paradigm changes driven by others.


Nor is their CEO in any way unbiased.

> IBM invented/developed/introduced magnetic stripe cards, UPC barcodes, the modern ATM, hard drives, floppies, DRAM, SQL, the 360 family of mainframes, the PC, Apollo guidance computers, Deep Blue. IBM created a fair share of the future we're living in.

Well put. “IBM was wrong about computers being a big deal” is a bizarre take. It’s like saying that Colonel Sanders was wrong about chicken because he, uh… invented the pressure fryer.


Nitpicking: IBM did not develop _the_ Apollo Guidance Computer (the one in the spacecraft with people); that was Raytheon. They did, however, develop the Launch Vehicle Digital Computer that controlled the Saturn rocket in Apollo missions. The AGC had a very innovative design, while the LVDC was more conventional for its time.

I've heard some secondhand stories about IBM's way of using "AI", and it is pretty much business-oriented, without much of the glamour and grand promises the other companies make (of course you still have shiny new things in business terms). It's actually good entertainment hearing about all the internal struggles of business vs. fancy during the holidays.

For a company that invented Deep Blue, they are really struggling with AI.

Their Granite family of models is actually pretty good! They just aren't working on the mainstream large LLMs that capture all the attention.

IBM is always very conscious of what their clients need (and the large consultancy business provides a very comprehensive view). It just turns out their clients don’t need IBM to invest in large frontier models.

IBM developed SSM/Mamba models and is also releasing training datasets, I think; quantum computing is also a strategic option..

For sure, but do you see them on any relevant leaderboards? Any news on how good they are?

I don't.

I know their models, but not because I constantly read about them.



> In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business.

IBM released the 5100 in September 1975 [0] which was essentially a personal computer in feature set. The biggest problem with it was the price tag - the entry model cost US$8975, compared to US$1298 for the entry Apple II released in June 1977 (close to two years later). The IBM PC was released in August 1981 for US$1565 for the most basic system (which almost no one bought, so in practice they cost more). And the original IBM PC had model number 5150, officially positioning it as a successor to the 5100.

IBM's big problem wasn't that they were uninterested in the category - it was that they initially insisted on using expensive IBM-proprietary parts (often sharing technology with their mainframe/midrange/minicomputer systems and peripherals), which resulted in a price that made the machine unaffordable for everyone except large businesses, governments, and universities (and even those customers often balked at the price tag). The secret of the IBM PC's success is that they told the design team to use commercial off-the-shelf chips from vendors such as Intel and Motorola instead of IBM's own silicon.

[0] https://en.wikipedia.org/wiki/IBM_5100


And outsourcing the operating system to Microsoft, because they didn't consider it that important.

This is the exact kind of thinking that got us into this mess in the first place, and I'm not blaming you for it; it seems to be something all of us do to an extent. We don't look to Meta, who only a few years ago thought that the Metaverse would be the "next big thing", as an example of failure to identify the future; we look to IBM, who made that mistake almost 30 years ago. Underestimating a technology seems to stick much harder than overestimating one.

If you want to be seen as relevant in this industry, or as a kind of "thought leader", the easy trick seems to be to hype up everything. If you do that and you're wrong, people will quickly forget. If you don't and you're wrong, that will stain your reputation for decades.


Good point. That kind of thinking is an absurdity. Saying IBM dropped the ball 70 years ago, without acknowledging that lessons were learned, that leadership has changed hands many times since then, and, most importantly, that the tech landscape back then was very different from today's, amounts to nothing more than a fallacious opinion, unless you grossly oversimplify everything.

Not even much of an IBM fan myself, but I respect their considerable contribution to the industry. Sure, they missed a shot back then, but I think this latest statement is likely accurate based on the information we currently have.


It’s easy to be a pessimist. Most things don’t work. So in 9 out of 10 cases you’re right.

But human breakthrough progress came mostly through optimists, who tried things no one else dared to do.


The amount of hate I've received here for similar statements is astonishing. What is even more astonishing is that it takes third-grade math skills to work out that current AI costs are astronomical (even ignoring the fact that there is nothing intelligent about the current AI), that they do not deliver on the promises, and that everyone is operating at wild losses. At the moment we are at "if you owe 100k to your bank, you have a problem, but if you owe 100M to your bank, your bank has a problem". It's the exact same bullshitter economy that people like Musk have been exploiting for decades: promise a ton, never deliver, make a secondary promise for "next year", rinse and repeat -> infinite profit. Especially when you rope in fanatical followers.

I don't want to defend Musk in any way, but I think you are making a mistake using him as an example, because what boosted him quite a lot is that he actually delivered what he claimed. Always late, but still earlier than anybody was guesstimating. And now he is completely spiraling, but it's a lot harder to lose a billion than to gain one, so he persists and even gets richer. Plus his "fanatical" followers are poor. It just doesn't match the situation.

Sounds a lot like "I'm not racist, but". There's a website dedicated to all of his BS: https://elonmusk.today

He is the definition of a cult. He collects money from fanatical followers who will praise every word he says, and never delivers: "oh, next year guys, for sure; wanna buy a Not-a-Flamethrower while you're at it?". Not to mention that what once were laughable conspiracy theories about him turned out to be true (such that even I laughed when I heard them). Torvalds is right with his statement about Musk: "incompetent" and "too stupid to work at a tech company".


I am just saying that he is a bad example, because he is a different beast from run-of-the-mill corporate Potemkin-ism.

How is any of that different from AI evangelists, be they regular hype kids or CEOs? "All code will be written by AI by the end of {current_year+1}." "We know how to build AGI by the end of {current_year+1}." "AI will discover new sciences." A quick search will turn up a billion such claims from everyone involved. Much like on here, where I'm constantly told that LLMs are a silver bullet and the only reason they aren't working for me is that my prompts are not explicit enough or I'm not paying for a subscription. All while watching people submit garbage LLM code and break their computers by copy-pasting idiotic suggestions from ChatGPT into their terminals. It is depressing how close we are to the Idiocracy world without anyone noticing. And it did not take 500 years, but just 3. Everyone involved - Altman, Zuckerberg, Musk, Pichai, Nadella, Huang, etc. - is well aware they are building a bullshit economy on top of bullshit claims and false promises.

The last sentence sounds a lot like a (partial?) Ponzi scheme.

Pretty close really. Just enough to buy some plausible deniability I suppose.

> We don't look to Meta, who only a few years ago thought that the Metaverse would be the "next big thing" as an example of failure to identify the future, we look to IBM who made that mistake almost 30 years ago.

The grandparent points to a pattern of failures, whereas you point to Meta's big miss. What you miss about Meta, and I am no fan, is that Facebook purchased WhatsApp and Instagram.

In other words, two out of three ain’t bad; IBM is zero for three.

While that's not the thrust of your argument, which is about the problem of jumping on every hype train, the post to which you reply is not about the hype cycle. Rather, that post calls out IBM for a failure to understand the future of technology, and does so by pointing to a history of failures.


> In other words, two out of three ain’t bad; IBM is zero for three.

Many others in this thread have pointed out IBM's achievements but regardless, IBM is far from "zero for three".


> Many others in this thread have pointed out IBM's achievements but regardless, IBM is far from "zero for three".

I was specifically commenting in the context of this thread.* I was not trying to characterize either IBM or Meta except with reference to the arguments offered by this thread’s ancestors.

I understood (and understand) that such scorekeeping of a company as storied as IBM is at best reductive and at worst misrepresentative.

* Your reference to “this thread” actually addresses sibling comments to OP (ggggp), not this thread which was started by gggp.


Got anything vis-a-vis the message as opposed to the messenger?

I'm not sure these examples are even the gotchas you're positing them as. Xerox is a dinosaur that was last relevant at the turn of the century, and IBM is a $300bn company. And if it wasn't obvious, the Apple II never made a dent in the corporate market, while IBM and later Windows PCs did.

In any case, these examples are almost half a century old and don't relate to capex ROI, which was the topic of discussion.


If it's not obvious, Steve's quote is ENTIRELY about capex ROI, and I feel his quote is more relevant to what is happening today than anything Arvind Krishna is imagining. The quote is posted in my comment not to grandstand Apple in any sense, but to underline just how consistently wrong IBM has been about so many opportunities that it failed to read correctly: reprography, minicomputers and microcomputers being just three.

Yes it is about ROI: "IBM enters the personal computer market in November ’81 with the IBM PC. 1983 Apple and IBM emerged as the industry’s strongest competitors each selling approximately one billion dollars worth of personal computers in 1983, each will invest greater than fifty million dollars for R&D and another fifty million dollars for television advertising in 1984 totaling almost one quarter of a billion dollars combined, the shakeout is in full swing. The first major firm goes bankrupt with others teetering on the brink, total industry losses for 83 out shadow even the combined profits of Apple and IBM for personal computers."


I have no horse in this race.

I don’t think this is really a fair assessment. IBM is in fact a huge company today and it is possible that they are because they took the conservative approach in some of their acquisition strategy.

It is a bit like watching someone play poker and fold and then it turns out they had the high hand after all. In hindsight you could of course know that the risk would have been worth it but at the moment perhaps it did not seem like it given the money the first player would be risking.


> I don’t think this is really a fair assessment. IBM is in fact a huge company today and it is possible that they are because they took the conservative approach in some of their acquisition strategy.

I can also imagine IBM was being approached with hundreds, if not thousands, of propositions. That they missed three that turned out to be big is statistically likely.


A big difference is that in the past, things like the potential of the PC were widely underestimated. And then the internet's potential was as well.

But in modern times it's rather the opposite scenario. The average entity is diving head first into AI, simply expecting a revolutionary jump in capability that a more 'informed', for lack of a less snooty term, perspective would suggest is quite unlikely to occur anytime in the foreseeable future. Basically we have a modern-day gold rush where companies are taking out unbelievably massive loans to invest in shovels.

The only way this doesn't catastrophically blow up is if AI companies manage to convince the government they're too big to fail, and get the Boeing, banks, et al. treatment. And I expect that's exactly the current strategy, but it's rather a high-risk, low-reward type of strategy.


>things like the potential of the PC were somewhat widely underestimated.

The potential of the AI that comes within reach at maximum expenditure levels may just be more widely overestimated.

The potential to make "that much money" even more challenging.

A very opposite scenario.

I think so many corporations are looking at how expensive actual humans have always been, and can be sure always will be, so much so that it's a major cost item that cannot be ignored. AI opens up the possibility of a whole new level of automation, or outright replacement, for routine simple-minded tasks, to a degree that never existed before. More jobs could be eliminated than in previous waves of mechanical and digital automation.

When you do the business math, the savings could be enormous.

But you can only realistically save as much as you are actually wasting, otherwise if you go too far you shoot yourself in the foot.

Even with all that money to work with, if you're in practice hunkering down for savings because you can't afford real people any more, you surely can't say the sky's the limit. Not like selling PCs, or anything that's capable of more unbridled growth.

When PCs arrived, they flew off the shelf even at their high initial retail prices.

People in droves (though not the silent majority) are shunning free AI, and the movement is growing, with backlash in proportion to the foisting.


I have no special knowledge about IBM vs Apple historically, but: a quarter billion in capex when you've earned a billion in revenue in a single year is extremely different from what we're seeing now. These companies are spending all of their free cash flow, then taking on debt, to the tune of percentage points of world GDP, and multiples of any revenue they've seen so far. That kind of oversupply is a surefire way to kill any ROI.

>the message as opposed to the messenger?

Exactly.

The message is plain to see with very little advanced math.

The only news is that it is the CEO of IBM saying it out loud.

IMHO he has some of the most credible opinions at this scale that many people have seen.

It's "highly unlikely" that all this money will be paid back to everyone that invested at this point. The losers probably will outnumber the winners, and nobody knows whether it will end up becoming a winner-take-all situation yet. A number of wealthy players remain at the table, raising stakes with each passing round.

It's so much money that it's already too late to do anything about it, and the full amount hasn't even changed hands yet.

And the momentum from something so huge can mean that almost the entire amount will have to change hands a second time before a stable baseline can be determined relative to pre-existing assets.

This can take longer than anyone gives credit for, just because of the massiveness; in the meantime, established real near-term growth opportunities may languish or even fade as the skew in the rationality/solvency balance awaits the rolling dice coming to rest.


> Got anything vis-a-vis the message as opposed to the messenger?

Sure: people disagree. It's not like there is anything particularly clever that the IBM CEO provided here. The guy not investing in something saying it won't work is about as good as the people who do saying it will. It's simply different assumptions about the future.


Would you read this if I (a nobody) told you and not the "CEO of IBM"? In that case it's completely fair to question the messenger.

I read the actual article.

He is pointing out that the current costs to create the data centres mean you will never be able to make enough profit to cover those costs: $800 billion just to cover the interest.
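The interest claim is simple arithmetic. A sketch with assumed figures (the capex total and rate below are illustrative, chosen only to show what kind of inputs reproduce an $800B annual figure):

```python
# Back-of-envelope carrying cost of the AI buildout.
# Both inputs are ASSUMED illustrative figures, not sourced numbers.

capex_usd = 8e12         # assumed ~$8 trillion cumulative buildout
cost_of_capital = 0.10   # assumed 10% blended interest / required return

annual_carry = capex_usd * cost_of_capital
print(f"Annual carrying cost: ${annual_carry / 1e9:,.0f}B")  # prints $800B
```

The point is that any revenue has to clear this carrying cost every year before a single dollar of the principal is recovered.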

OpenAI is already haemorrhaging money, and the idea of data centres in space has already been debunked. There is even a recent paper arguing that LLMs will never become AGI.

The article also finishes out with some other experts giving the same results.

[edit] Fixed $80 to $800


$800B, to be clear, is the claim, not $80B.

Clearly I need to read slower. Thanks. :)

While AGI might be the Holy Grail, AI doesn’t need to be general human-level to be useful and profitable.

it just needs us to wait one more year right?

It's already quite useful. While not all AI service providers are profitable, I've worked on projects that saved a lot of money for the company - a lot more than it cost us running the servers.

>> There is even a recent paper that points out that LLMs will never become AGI.

can you share a link?


Took me a while to find again, as there are a lot of such papers in this area.

https://www.arxiv.org/pdf/2511.18517


A single author, in a physics department. Seems unlikely to be groundbreaking or authoritative.

Welcome to the world of papers. Have a read and get back to us. Dismissing out of hand is rarely constructive.

Took me a while, but I read it. Thought it was actually a pretty good and well-researched paper that does a good job rationalizing its thesis. Thanks for sharing.

Ad hominem right out of the gate? Really?

Is this AI paper written by a reputable subject-matter expert? It seems to be written by a physicist, and it also appears to be the only academic work by this author in English.

So you are dismissing it because of that? Certainly read the paper first and attack the arguments, not the author. It even has 10 pages of citations.

I have read it. It is nothing new on the subject, but it was just the recent paper I saw on HN and the person was asking for the link.

The crux is that an LLM is not, and can never be, intelligent in the sense of an AGI. It is easier to think of it as a way to store and retrieve knowledge.


How many articles on this topic do we imagine there are? Thousands? Hundreds of thousands? It is hopeless to read every one by any author, no matter how unrelated to the domain, and judge them individually on their merits. Being a subject domain expert is not a perfect measure of paper quality but it's the only feasible way to make a first pass at filtering.

Even if I did read it, I have no hope of understanding if it has made a fundamental mistake because I don't have the subject matter expertise either.

(I imagine it has made a fundamental mistake anyway: for LLMs to be useful progress toward AGI they don't have to be a feasible way to create AGI by themselves. Innovation very often involves stepping through technologies that end up only being a component of the final solution, or inspiration for the final solution. This was always going to be an issue with trying to prove a negative.)


> It is hopeless to read every one by any author,

It was a paper posted on HN a few days ago and someone asked for the evidence of my statement. I supplied it.

Now if they actually read it and disagreed with what it was saying, I'd be more than happy to continue the conversation.

Dismissing it just because you don't understand is a terrible thing to do to yourself. It's basically sabotaging your intelligence.

Sometimes papers are garbage, but you can only make that statement after you have read/understood it.

Use an LLM if you want.


I was really just asking, not trying to be dismissive. Expertise is an important context to evaluate a piece of writing.

Absolutely. If it is not written by someone who has real world experience and deep knowledge it has no more value than a HN comment.

It's a good read, with good citations.

The core piece, as quoted from the abstract: "AGI predictions fail not from insufficient compute, but from fundamental misunderstanding of what intelligence demands structurally."

It then goes into detail about what that is and why LLMs don't fit. There are plenty of other similar papers out there.


It was more about a general principle than about the specific paper; that's why I mentioned it :)

Sorry to say, but the fact that you are arguing about LLMs never becoming AGI shows you are not up to date.

People don't assume LLMs will be AGI; people assume that world models will lead us to AGI.

I personally never assumed LLMs would become AGI. I always assumed that LLMs broke the dam for investment and research into massive-scale ML compute, and LLMs are very, very good at showing where the future goes, because they are already so crazy good that people can now imagine a future where AGI exists.

And that was very clear as soon as GPT-3 came out.

The next big thing will probably be either a LOT more RL or self-propelling AI architecture discovery. Both need massive compute to work well, but could then provide even faster progress once humans are out of the loop.


> People don't assume LLM will be AGI,

I wish that was true.

> people assume that World Models will lead us to AGI.

Who are these people? There is no consensus around this that I have seen. You have anything to review regarding this?

> as soon as GPT-3 came out.

I don't think that was true at all. It was impressive when it came out, but people in the field clearly saw the limitations and what it is.

RL isn't magical either. Google's AlphaGo, as an example, often required human intervention to get the RL to work correctly.


AlphaGo Zero doesn't need much human intervention at all.

Regarding world models: all the big names. LeCun, Demis Hassabis, Fei-Fei Li too. And they are all working on it.

LLMs will definitely play some type of role in AGI. After all, you can already ask an LLM a lot of basic things, like 'what are common tasks to make a tea'. A type of guide, long-term fact memory, or whatever this can be called.


> AlphaGo Zero doesn't need much human intervention at all

You should research it and not just read news articles. RL did not work and required human intervention numerous times before it got close to what it is now.


Are OpenAI or Anthropic et al seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all in on milking LLMs for all they are worth.

I mentioned it in my other comment but people like LeCun, Demis Hassabis, Fei-Fei Li do.

There are indications that OpenAI is doing this, but nothing official as far as I know, and I have not heard anything from Anthropic.


IBM is an interesting beast when it comes to business decisions. While I can't give exact details, their business intelligence and ability to predict monetary outcomes are uncannily spot-on at times.

So, when their CEO says that this investment will not pay off, I tend to believe them, because they most probably have the knowledge, insight and data to back that claim, and they have run the numbers.

Oh, also, please let's not forget that they dabbled in "big AI" before everyone else. Anyone remember Deep Blue, and Watson, the original chatbot backed by big data?


As evidenced by the fact that they are a 100+ year old company that still exists. People forget that.

We can cherry-pick blunders made by any big company to make a point. Maybe it would be more honest to also list companies IBM passed on that turned out to be rubbish? And all the technologies that IBM did invest in that made them a ton of money and became industry standards?[0]

Today, Xerox has less total revenue than IBM has profit. DEC went out of business 27 years ago. Apple is in an astoundingly great place right now, but Jobs got kicked out of his own company, and then returned when it was about to fail, having to take investment from Microsoft(!) in order to stay afloat.

Meanwhile, IBM is still here, making money hand over fist. We might not have a ton of respect for them, being mostly a consulting services company these days, but they're doing just fine.

[0] As another commenter points out: https://news.ycombinator.com/item?id=46131245


Were Xerox, Dec, or Apple burning investor money by the billions of dollars?

> Were Xerox, Dec, or Apple burning investor money by the billions of dollars?

Shhh. You are not allowed to ruin OpenAI’s PPU value. Can’t make the E7’s feel bad.


No, but the comment above and variations of it are mentioned in every thread about IBM, so it’s probably just a reflex at this point without much thought behind it.

“If you’re not happy you can sell your shares”

Xerox is clearly crushing it in 2025... /s

I'm typing this comment from an Apple MacBook, whose interface is a direct result of Xerox PARC allowing Steve Jobs to view the Alto. Xerox was extremely innovative at that time, and with the right leadership, could have become #1 in personal computing.

That's completely beside the point, though. Kodak invented the digital camera, did not think anything of it, and others then ate their lunch. Those others are also not crushing it in 2025. The point is IBM is not the go-to source to listen to about AI. Also not saying they are not right; even a broken clock is right twice a day.

> The point is that IBM is not the go-to to listen to about AI.

Why not, though? For better or worse, they're a consulting services company these days, and they work with an eye-wateringly large number of companies. I would expect them to have a very good view as to what companies use AI for, and plan/want to use AI for in the future. They may not be experts in the tech itself, but I think they're decently well-positioned to read the tea leaves.


DEC went down the drain, and Xerox is 1/1000 of IBM's market cap. IBM made its own personal computer, superior in its relative openness, that ended up running the world, mostly maintaining direct binary compatibility for 40+ years, even without IBM really paying attention.

How much did IBM itself benefit from the PC? I thought the clones ate their lunch there

Wikipedia says their PC revenue was twice Apple's by 1984 at $4 billion/year. Not bad for a side hustle?

My understanding is that clones were a net positive, just like widespread Windows/Office piracy is a net positive for MS.


fair enough

What does that have to do with the current CEO's assessment of the situation?

[flagged]


A revolution means radical changes executed over a short period of time. Well, four years in, this has got to be one of the smallest "revolutions" we have ever witnessed in human history. Maybe it's revolutionary for people who get excited about crappy pictures they can insert into their slides to impress management.

The AI astroturfing campaign.

If you had billions to gain, would you invest a few 100k or millions in an astroturfing campaign?


Every other day Anthropic comes up with a new "AI is scary" marketing campaign. Like https://www.bbc.com/news/articles/cpqeng9d20go (the "AI blackmails our employee" episode) or https://time.com/7335746/ai-anthropic-claude-hack-evil/ ("Our model turned evil and hacked us omgg").

They put these stories out for the general public (who might not understand that this is just BS); it makes AI seem scary, so people get a lopsided view of AI, attributing capacities that are straight out of science fiction.

Millions is an understatement of how much AI marketing spend there is.


You definitely want to be standing in front of a chair when the music stops.

IBM sees the funding bubble bursting and the next wave of AI innovation as about to begin.

IBM was too early with "Watson" to really participate in the 2018-2025 rapid scaling growth phase, but they want to be present for the next round of more sensible investment.

IBM's CEO is attempting to poison the well for funding, startups, and other ventures so IBM can collect itself and take advantage of any opportunities to insert itself back into the AI game. They're hoping timing and preparation pay off this time.

It's not like IBM totally slept on AI. They had Kubernetes clusters with GPUs. They had models and notebooks. But their offerings were the absolute worst. They weren't in a position to service real customers or build real products.

Have you seen their cloud offerings? Ugh.

They're hoping this time they'll be better prepared. And they want to dunk on AI to cool the playing field as much as they can. Maybe pick up an acquisition or two on the cheap.


How exactly are they poisoning the well..? OpenAI committed to $1.4 trillion in investments... with revenue of ~$13B. How is IBM's CEO contributing to that already absolutely poisoned situation? Steve Jobs did not care about naysayers when he introduced the iPhone, because his product was so innovative for its time. According to AI boosters, we now have a segment of supposedly incredibly powerful and at the same time "dangerous" AI products. Why are they not wiping the floor with the "negators", "luddites", "laggards", etc.? After so many hundreds of billions of dollars and supposedly so many "smart" AI researchers... Where are the groundbreaking results, man? Where are the billion-dollar startups launched by single persons (heck, I'd settle even for a small team)? Where are the ultimate applications, etc.?

50-year grudges are not relevant; there is no one still at IBM who worked there in 1977, IMHO.

It’s the ship of Theseus in corporate form. Even if all the people are gone but the culture hasn’t changed, is the criticism inaccurate?

> Even if all the people are gone but the culture hasn’t changed

Can you expand on this? What was the culture then versus now?

For example, back then it was the culture to have suit inspectors ensure you had the right clothes on, and even measure your socks. (PBS, Triumph of the Nerds)


I mean, okay, but you're taking the current leadership's words and claiming they are incorrect because IBM management was not great at identifying trends decades ago. Historical trend is not an indicator of the future, and it's not engaging in good faith with the conversation about whether overspending on AI can be backed by future revenue. You're attacking the messenger instead of the message.

I’m saying: given IBM’s track record of repeatedly failing at innovation and failing to invest in the correct technologies, why do you assume today’s CEO is better at it and has bucked the milquetoast culture that has pervaded IBM? It’s a company that has largely divested its ability to provide solutions and technology and turned into more of a large consultancy shop (plus milking their legacy mainframe contracts and whatnot).

Except that, over the same time period, IBM was also very successful at innovation and investing in technologies. Yes, they made some very high-profile misses that the top poster lists, but they were still a powerhouse doing other stuff during that time (as a commenter replying to that points out).

Except all the technologies listed were basically from the heyday of the '70s and '80s. Gerstner redirected IBM into an enterprise services business in the early '90s, and by the late '90s that's where all their money was. The 2002 acquisition of PwC Consulting cemented this shift. In the mid-2010s, under Ginni Rometty, IBM software became secondary, and it abandoned selling standalone software, instead bundling it into its consultancy.

So IBM hasn't been doing hardware R&D for about three decades and abandoned software R&D well over a decade ago. R&D hasn't been in their DNA for a long time, their previous contributions notwithstanding.


> So IBM hasn't been doing hardware R&D for about three decades

Even a 5-second Google search says you are wrong.

https://research.ibm.com/semiconductors/ai-hardware-center

https://research.ibm.com/topics/quantum-hardware


Having a research department is very different from actually accomplishing anything, or from it being a meaningful revenue stream. On all fronts this is just not true. The quantum stuff so far in the industry is vaporware, and it's unclear whether IBM is really making forward progress versus yearly pronouncements of progress to keep the funding stream open.

You said they didn’t have one. A Google search said you were wrong.

You don’t get to change the goalposts.


I didn't read the top-level comment as dismissive or 'proving it wrong', but rather as adding context, or even as being somewhat humorous

I don't understand how calling something "a ship of Theseus in corporate form", saying "culture hasn't changed", etc., is not dismissive of the CEO's actual comment on AI overspending. They dismissed the content of the message by saying IBM's culture sucks, is how I read it. Also, things can be funny and dismissive at the same time; they often are.

Culture evolution can be very fast, yet some cultures stick around for a very long time.

"The amount being spent on AI data centres not paying off" is a different statement to "AI is not worth investing in". They're effectively saying the portions people are investing are disproportionately large to what the returns will end up being.

It's a difficult thing to predict, but I think there's almost certainly some wasteful competition here. And some competitors are probably going to lose hard. If models end up being easy to switch between and the best model is significantly better than its competitors, then anything invested in weaker models will effectively be for nothing.

But there's also a lot to gain from investing in the right model. Even so, it's possible those who invested in the winner may have to wait a long time to see a return on their investment, and they could still over-allocate their capital at the expense of other investment opportunities.


> IBM has not exactly had a stellar record at identifying the future.

This would be very damning if IBM had only considered three businesses over the course of seventy years and made the wrong call each time.

This is like only counting three times that somebody got food poisoning and then confidently asserting that diarrhea is part of their character.


Right, you just missed the part where DEC went out of business in the 90s. And IBM is still around, with a different business model.

Steve Jobs, the guy that got booted out of his own company and that required a lifeline from his arch nemesis to survive?

This is all true, but it was only true in hindsight and as such does not carry much value.

It's possible that you are right and AI is 'the future' but with the present day AI offering I'm skeptical as well. It isn't at a level where you don't have to be constantly on guard against bs and in that sense it's very different from computing so far, where reproducibility and accuracy of the results were important, not the language that they are cast in.

AI has killed the NLP field and it probably will kill quite a few others, but for the moment I don't see it as the replacement of general computing that the proponents say that it is. Some qualitative change is still required before I'm willing to check off that box.

In other news: Kodak declares digital cameras a fad, and Microsoft saw the potential of the mp3 format and created a killer device called the M-Pod.


But how many companies did IBM pass on that did crash and burn? And how many did it not pass on that did decently? They're still around after more than three generations' worth of tech industry. They're doing something right.

TLDR Cherrypicking


You, or your existence, probably triggers multiple transactions per day through a POWER mainframe without you even knowing it. Their mainframes handle the critical infrastructure that can't go down. It's so reliable we don't even think about it. I shudder to think of Microsoft or Apple handling that.

How about checking how many companies exist today versus how many existed in 1958? If you look at it that way, just surviving is an achievement in itself, and then you might interpret their actions as extremely astute business acumen.

IBM is still alive and kicking, and definitely more relevant than Xerox or DEC. You are completely misconstruing Jobs’ point to justify the current AI datacenter tulip fever.

This isn't even a great argument at a literal level. Nowadays nobody cares about Xerox and their business is selling printers, DEC was bought by Compaq which was bought by HP. Apple is important today because of phones, and itself was struggling selling personal computers and needed a (antitrust-motivated) bailout from Microsoft to survive during the transition.

Yet here they are at the forefront of quantum computing research

Didn't they also pass on SAP at some point? I think I read that somewhere.

So, is the napkin math wrong, or are you just going by the company history?

Cool story, but it’s more than just the opinion of this CEO. It’s logic.

Hardware is not like building railroads: the hardware is already out of date once deployed, and the clock has started ticking on writing off the expense or turning a profit on it.

There are fundamental discoveries needed to make the current tech financially viable and an entire next generation of discoveries needed to deliver on the over inflated promises already made.


You could try addressing the actual topic of discussion vs this inflammatory and lazy "dunk" format that frankly, doesn't reflect favorably on you.

For some strange reason a lot of people were attracted to a comment that speaks about everything else BUT the actual topic, and it's the top comment now. Sigh.

If you think that carefully chosen anecdotes out of many, many more are relevant, there needs to be at least an attempt at reasoning. There is nothing here. It's really just a barebones mentioning of stuff intentionally selected to support the preconceived point.

I think we can, and should, do better in HN discussions, no? This is "vibe commenting".


The idea that a company DNA somehow lives over 100 years and maintains the same track record is far fetched.

That the OpenAI tech bros are investing in AI using grown-up ROI is similarly far-fetched; they are burning money to pull ahead of the rest, assuming the world will be in the palm of the winner and that there is only one winner. Will the investment pay off if there are three neck-and-neck companies?


this is cherry picking.

I’m sorry, but this is stupid; do you understand that you have several logical errors in your post? I was sure Clinton was going to win in 2016. Does that mean that when I say 800 is bigger than 8, I'm not to be trusted?

Do people actually think that running a business is some magical realism where you can manifest yourself to become a billionaire if you just believe hard enough?


The post is almost worse than you give it credit for. It doesn't even take into account that different people are making the decisions.

Hot Hand Fallacy.

Even a broken watch is right twice per day


