
Every Google publication goes through multiple reviews. If anyone thinks the publication is a competitive risk, it gets squashed.

It's very likely no one is using this architecture at Google for any production workloads. There are a lot of student researchers doing fun proof-of-concept papers; they're allowed to publish because it's good PR and it's good for their careers.


The amazing thing about this is that the first author has published multiple high-impact papers with Google Research VPs! And he is just a 2nd-year PhD student. Very few L7/L8 RS/SWEs can even do this.

Underrated comment, IMHO. There is such a gulf between what Google does internally and the papers and source code it publishes that I always think about its motivations before I read or adopt anything. Think Borg vs. Kubernetes, Stubby vs. gRPC.

I thought you were being sarcastic at the start.

Vermont bans billboards on highways. It's so nice.


My parents were architects, and my sister and I lived our first few years in Honolulu before moving to the SF Bay Area. There were no billboards in Hawaii, and I distinctly recall the first drive from SFO to the East Bay. I was unable to avoid reading and staring at every billboard next to the freeway, and it literally made me throw up. I didn't understand what was happening.

Of course, I was quickly conditioned off of that response to billboards, which I consider natural.


He's arguably the most famous performance engineer. I've followed his work for 15 years.

Screaming at disk drives has been my go-to icebreaker party trick for the last decade.

Uh, it's because the Ellisons bought CBS and put Bari Weiss in charge. That's why it's bad: because they're bad people with the goal of ending American democracy.

Idk. They look like they're ahead on the saturated benchmarks and behind on the unsaturated ones. Looks more like they overfit to the benchmarks.

Because Google already has many healthy revenue streams that will benefit from LLMs and all it has to do in the AI space is remain competitive.

I don't think you understood the parent post.

The point is that this is not repeatable: curing HIV isn't something we now know how to do.

The second point is: this did not give us a significant new insight into the causes or mechanisms of treatment of HIV.


> curing HIV isn't something we now know how to do.

Technically we do, but we will never ever give someone a stem cell transplant to cure their HIV, because there are SIX highly effective different classes of medication to treat HIV. Majorly treatment-resistant, high-fitness HIV is NOT a concern on the horizon.

> The second point is: this did not give us a significant new insight into the causes or mechanisms of treatment of HIV

The first unique cases of both variants of this DID lead to significant, valuable insights in several areas. But further cases, not so much. Myeloablation clearing the HIV reservoirs while the patient remains on ART, leading to a total cure, no longer excites any knowledgeable scientist in 2025.


Automated AI detection tools do not work. This whole article is premised on an analysis by someone trying to sell their garbage product.

Yeah, that is the premise all of these articles/tools just conveniently brush off. “We detected that x%…” OK, and how do I know your detection algorithm is right?

Usually the detectors are only called in once a basic "smell test" has failed. Those tests are imperfect, yes, but Bayesian probability tells us how to work out the rest. I have 0 trouble believing that the prior probability of an unscrupulous individual offloading an unpleasant and perceived-as-just-ceremonial duty to the "thinking machine" is around 20%. See: https://www.youtube.com/watch?v=lG4VkPoG3ko&pp=ygUZdmVyaXRhc...
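
To make the Bayesian point concrete, here's a minimal sketch. All of the numbers are assumptions for illustration (the 20% prior from above, plus made-up detector accuracy figures), not measured properties of any real detector:

    # All numbers here are assumptions for illustration, not measurements.
    prior = 0.20           # assumed base rate of AI-written submissions
    sensitivity = 0.90     # assumed P(detector flags it | AI-written)
    false_positive = 0.10  # assumed P(detector flags it | human-written)

    # Bayes' theorem: P(AI-written | flagged)
    p_flagged = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_flagged
    print(round(posterior, 2))  # -> 0.69

The point being: even a mediocre detector, applied only after a suspicious prior has been established, can push the posterior well past a coin flip, which is how the "smell test first, detector second" workflow is supposed to work.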

I think they're using technical language that you're unaware of, and you're misinterpreting the title of a research paper.

That presumes that performance improvements are necessary for commercialization.

From what I've seen, the models are smart enough; what we're lacking is the understanding and frameworks necessary to use them well. We've barely scratched the surface on commercialization. I'd argue there are two things coming:

-> Era of Research -> Era of Engineering

Previous AI winters happened because we didn't have a commercially viable product, not because we weren't making progress.


The labs can't just stop improvements though. They made promises. And the capacity to run the current models is subsidized by those promises. If the promise is broken, then the capacity goes with it.

> the capacity goes with it.

Sort of. The GPUs exist. Maybe LLM subs can’t pay for electricity plus $50,000 GPUs, but I bet after some people get wiped out, there’s a market there.


Datacenter GPUs have a lifespan of 1-3 years depending on use. So yes, they exist, but not for long, unless they go entirely unused. But they also depreciate in efficiency compared to new hardware extremely fast, so their shelf life is severely limited either way.

Personally, I am waiting for the day I can realistically buy a second-hand, three-year-old datacentre GPU so I can run Kimi K2 in my shed. Given enough time, not a pipe dream. But 10 years at least.

You'll probably be able to run Kimi K2 on the iPhone 27.

This is why I find the business case of putting datacenters in orbit to be so stupid. And yet there are several startups saying they are gonna do just that.

At this pace, it won't be many years before the industry is dependent on resource wars in order to sustain itself.

> They made promises.

That's not that clear. Contracts are complex and have all sorts of clauses. Media likes to just talk big numbers, but it's much more likely that all those trillions of dollars are contingent on hitting some intermediate milestones.


Maybe those promises can be better fulfilled with products based on current models.

We still don't have a commercially viable product though?

I've fed thousands of dollars to Anthropic/OAI/etc for their coding models over the past year despite never having paid for dev tools before in my life. Seems commercially viable to me.

> I've fed thousands of dollars to Anthropic/OAI/etc for their coding models over the past year despite never having paid for dev tools before in my life. Seems commercially viable to me.

For OpenAI to produce a 10% return, every iPhone user on earth needs to pay $30/month to OpenAI.

That ain’t happening.
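
For scale, here's the back-of-envelope arithmetic behind that claim. The iPhone count is my own assumption (roughly the order of magnitude Apple cites for active iPhones); nothing here comes from OpenAI's actual financials:

    # Assumed figures: ~1.5bn active iPhones worldwide, and the
    # $30/month per user from the claim above.
    iphone_users = 1.5e9
    monthly_fee = 30  # dollars
    annual_revenue = iphone_users * monthly_fee * 12
    print(annual_revenue / 1e9)  # -> 540.0, i.e. ~$540bn/year

Under those assumptions, "every iPhone user at $30/month" works out to roughly half a trillion dollars a year, which is at least the right ballpark against the trillion-dollar cost commitments quoted downthread.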


They don't sell their models only to individuals but also to companies, most likely with different business and pricing models, so that's an overly simplistic view of their business. YoY their spending increases; we can safely assume that one of the reasons is the growing user base.

The time will probably come when we won't be allowed to consume frontier models without paying anything, as we can today, and when this $30 will most likely double or triple.

Though the truth is that R&D around AI models, and especially their hosting (inference), is expensive and won't get any cheaper without significant algorithmic improvements. Going by history, my opinion is that we may very well be ~10 years from that moment.

EDIT: HSBC has just published some projections. From https://archive.ph/9b8Ae#selection-4079.38-4079.42

> Total consumer AI revenue will be $129bn by 2030

> Enterprise AI will be generating $386bn in annual revenue by 2030

> OpenAI’s rental costs will be a cumulative $792bn between the current year and 2030, rising to $1.4tn by 2033

> OpenAI’s cumulative free cash flow to 2030 may be about $282bn

> Squaring the first total off against the second leaves a $207bn funding hole

So, yes, expensive (mind the rental costs only) ... but foreseen to be penetrating into everything imaginable.


>> OpenAI’s cumulative free cash flow to 2030 may be about $282bn

According to whom, OpenAI? It's almost certain they flat-out lie about their numbers, as suggested by their 20% revenue share with MS.


A bank - HSBC. Read the article.


Not sure where that math is coming from. Assuming it's true, you're ignoring that some users (me) already pay 10X that. Btw, according to Meta's SEC filings: https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... they made around $22/month per American user (not even a heavy user or an affluent iPhone owner) in Q3 2023. I assume Google would be higher due to larger marketshare.

A bank's sell-side analyst team, which is quite different.

If you fed thousands of dollars to them, but it cost them tens of thousands of dollars in compute, it’s not commercially viable.

None of these companies have proven the unit economics on their services


If all frontier LLM labs agreed to a truce and stopped training to save on cost, LLMs would be immensely profitable now.


Those are effectively made-up numbers, since they're given to him by an anonymous source we have no way of corroborating, and we can't even see the documents themselves. And it contradicts not just OpenAI's official numbers, but first-principles analyses of what the economics of inference should be [1], the inference profit reports of other companies, and what an analysis of the inference market would suggest [2].

[1]: https://martinalderson.com/posts/are-openai-and-anthropic-re..., https://github.com/deepseek-ai/open-infra-index/blob/main/20...

[2]: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...



google what you just said and look at the top hit

it's an AI summary

google eats that ad revenue

it eats the whole thing

it blocked your click on the link... it drinks your milkshake

so, yes, there's a $100 billion commercially viable product


Google Search has 3 sources of revenue that I am aware of: ad revenue from the search results page, sponsored search results, and AdSense revenue on the websites the user is directed to.

If users just look at the AI overview at the top of the search page, Google is hobbling two sources of revenue (AdSense, sponsored search results), and also disincentivizing people from sharing information on the web that makes their AI overview useful. In the process of all this they are significantly increasing the compute costs for each Google search.

This may be a necessary step to stay competitive with AI startups' search products, but I don't think this is a great selling point for AI commercialization.


And so ends the social contract of the web, the virtuous cycle of search engines sending traffic to smaller sites which collect ad revenue which in turn boosts search engine usage.

To thunderous applause.


Thank god. The fake search results, the money that manipulates our access to information. All gone. Finally we can try something else. I have a feeling it's going to be worse, though.

I don’t think the models are smart at all. I can have a speculative debate with any model about any topic, and they commit egregious errors at an extremely high rate.

They are, however, very good at things we’re very bad at.


Have you considered the AI is right, and you make the mistakes?

> the models are smart enough, what we're lacking is the understanding and frameworks necessary to use them well

That’s like saying “it’s not the work of art that’s bad, you just have horrible taste”

Also, if it were that simple, a wrapper of some sort would solve the problem. Maybe even one created by someone who knows this mystical secret to properly leveraging gen AI.


Besides building the tools for proper usage of the models, we also need smaller, domain-specific models that can run with fewer resources.
