This article is annoying. It's not wrong, but it also doesn't come across as terribly relevant, because it never suggests a vaguely plausible alternative. The closest it comes is saying that everyone should go back to using "regular" search engines that return "regular" pages, and that if all the players (browser makers + search engines) did that, then everything would be just fine.

That'd be great, if that pristine Web still existed to search and people were happy with today's results of searching it. But in the real world, the Web is a pile of auto-generated and auto-assembled fragments of slop, SEO-optimized to death, puddled atop and all around the surviving fragments of value. (The value is still there! I suspect the total value in the Web has never stopped increasing. Just like those monkeys are always typing out more and more Shakespeare.) Also in the real world, people are decisively choosing the AI-generated summaries and fevered imaginings. Not for everything, but web search -> URL -> page visit is becoming a declining percentage of how people find things, and it won't always be able to support everything that it does today.

It's not that I particularly want AI in my browser. I would say that I emphatically don't, except that automatic translation is really nice, and Firefox's automatic names for tab groups are pretty cool, and I'm sure here and there people will come up with other pieces. I'm actually ok with AI that targets real needs, which is 0.01% of what people are pushing it for. But I also think that we're past the point where NOT having AI in the browser is a sustainable position. (In terms of number of users and therefore financially.)

Should Mozilla be head over heels in love with AI, as it appears to be now? I'd definitely prefer if it weren't. But telling Mozilla "don't do bad thing, it'll make you irrelevant and have no users" is fine and dandy but ultimately pointless unless you have an alternative that doesn't require the entire world to cooperate in turning back the clock.

(Disclosure: Mozilla pays me a salary to write bugs.)

(And working code! I write some of that too!)

(And no, I currently don't do anything that adds AI to the browser, nor can I think of anything I'd want to work on that would add any AI.)



I think the vast majority of attempts to shoehorn AI into the browser are deeply unimaginative and very much garnering a "who ordered that?" reaction. But I suspect we're going to converge on some specific use cases that everyone's going to want, and it might just be important to be in the game now as we collectively figure out what they are.

Recently a Y Combinator-funded project got highly upvoted on HN: a Chrome-based extension that used LLM capabilities to effectively do Greasemonkey-style scripting live in response to human requests. Now that is interesting, and it's a specific application that's actually meaningful, not just another AI chat sidebar.
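
To make that concrete, here's a rough sketch of the shape such an extension might take. This is my own illustration, not the actual project: the endpoint, request format, and injection approach are all made up.

    // Hypothetical content script: turn a natural-language request into a
    // userscript via an LLM, then inject it into the current page.
    async function applyUserRequest(request: string): Promise<void> {
      // Give the model some page context to work with (trimmed for size).
      const pageContext = document.documentElement.outerHTML.slice(0, 20_000);

      // Assumed LLM endpoint; any chat-completion-style API would do here.
      const response = await fetch("https://example.invalid/v1/generate-script", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          instruction:
            "Write a self-contained userscript (plain JS, no libraries) that " +
            `performs this request on the current page: "${request}"`,
          page: pageContext,
        }),
      });
      const { script } = (await response.json()) as { script: string };

      // Inject the generated script, Greasemonkey-style.
      const el = document.createElement("script");
      el.textContent = script;
      document.documentElement.appendChild(el);
      el.remove();
    }

    // e.g. wired to a text box in the extension popup:
    applyUserRequest("Hide everything on this page that mentions sponsored posts");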

I think it's a matter of workshopping, but I bet we're going to be discovering things users actually want that are not yet obvious to us. The example I keep thinking of is non-stupid agent tasking. I wouldn't mind an agent that browsed Amazon for critically acclaimed hard sci-fi books on Kindle Unlimited. I'd be willing to bet there are going to be numerous "why didn't I think of that" uses cooked up in the next few years.
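
Here's the kind of loop I'm imagining for that, driving the browser from the outside with Playwright. Again, purely a sketch: the LLM endpoint and its JSON action format are invented for illustration.

    import { chromium } from "playwright";

    type Action =
      | { kind: "goto"; url: string }
      | { kind: "click"; selector: string }
      | { kind: "done"; summary: string };

    // Ask a hypothetical LLM endpoint for the next browsing step, given the
    // goal and a plain-text snapshot of the current page.
    async function nextAction(goal: string, pageText: string): Promise<Action> {
      const res = await fetch("https://example.invalid/v1/next-action", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ goal, pageText }),
      });
      return (await res.json()) as Action;
    }

    // Observe -> decide -> act, until the model says it's done or we give up.
    async function runAgent(goal: string, maxSteps = 20): Promise<string> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto("https://www.amazon.com");
      try {
        for (let step = 0; step < maxSteps; step++) {
          const pageText = (await page.innerText("body")).slice(0, 20_000);
          const action = await nextAction(goal, pageText);
          if (action.kind === "done") return action.summary;
          if (action.kind === "goto") await page.goto(action.url);
          if (action.kind === "click") await page.click(action.selector);
        }
        return "Gave up after too many steps.";
      } finally {
        await browser.close();
      }
    }

    runAgent("Find critically acclaimed hard sci-fi on Kindle Unlimited").then(console.log);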


It's certainly wrong on some things. For example, claiming that there are no examples of positive uses of LLMs is flat out wrong. I'm generally on the very conservative side when it comes to AI hype but even I'll admit it will be transformative in some ways.

Certainly not the insane fantasies most of the gen-AI CEOs are pushing, but, for example, it's clear beyond any shadow of a doubt that traditional search is dead. AI-supported search is far superior in every way. For clarity, I'm talking about "deep research"-style search, where you get verifiable links to source materials along with your answers.

It's absolutely not crazy for Mozilla to get into this space, though if I were them I'd build an "AI search and agentic AI first" web application adjacent to Firefox and keep Firefox itself as a legacy-style browser. This would give Mozilla a clean slate to do it right, while also keeping those who are not early adopters happy.


> But in the real world, the Web is a pile of auto-generated and auto-assembled fragments of slop

There are parts of the web like that but your assertion seems to rely on this being universally true. It clearly and obviously isn't.

> Also in the real world, people are decisively choosing the AI-generated summaries and fevered imaginings.

Are they "decisively" choosing it if it's turned on by default? If it were actually opt-in then we could measure this. As it is I don't think you have any data to rely on when making this assertion.

> Not for everything, but web search -> URL -> page visit is becoming a declining percentage

The same web search companies that own AI models they're trying to sell? Do you not suspect there could be a few confounding variables in this analysis?

> except that automatic translation is really nice

Which we already had and has nothing to do with language models masquerading as "AI".

> is fine and dandy but ultimately pointless unless you have an alternative that doesn't require the entire world to cooperate in turning back the clock.

An alternative to what? Tab renaming? Bad article summaries? Weak search engine algorithms?


> Are they "decisively" choosing it

I'm pretty sure ChatGPT and Perplexity are "opt-in" in this sense.

> nothing to do with language models

how do you think translation works?


>> But in the real world, the Web is a pile of auto-generated and auto-assembled fragments of slop

> There are parts of the web like that but your assertion seems to rely on this being universally true. It clearly and obviously isn't.

In a sense, the whole web is like that and has been for a long time. Which is not surprising: 99% of everything is shit. We've just had tools that have been astoundingly successful at separating the wheat from the chaff. With the advent of AI, there are two significant differences: (1) the scale of shit is vastly greater; we're at something like 99.999% shit and adding 9s steadily; and (2) every way of distinguishing shit from gold is being steadily overcome by AI advances.

>> Also in the real world, people are decisively choosing the AI-generated summaries and fevered imaginings.

> Are they "decisively" choosing it if it's turned on by default? If it were actually opt-in then we could measure this. As it is I don't think you have any data to rely on when making this assertion.

I'm not referring to things embedded in browsers. I agree with your counterargument there. I'm talking about people using chat interfaces for search, and search engines presenting AI-generated results. There is hard data that shows users moving away from doing traditional searches and clicking on individual pages. From a quick (traditional!) search: https://www.statista.com/statistics/1454204/united-states-ge... https://www.gartner.com/en/newsroom/press-releases/2024-02-1... (Gartner representing a pre-AI source of made-up bullshit with questionable origins).

>> Not for everything, but web search -> URL -> page visit is becoming a declining percentage

> The same web search companies that own AI models they're trying to sell? Do you not suspect there could be a few confounding variables in this analysis?

Undoubtedly. But what are you implying -- that you could start a no-AI search engine and be wildly successful today because people will use yours over the other engines that have AI summaries? That's kind of what you're implying for the browser (and one of the most common uses of a browser is a search engine).

>> except that automatic translation is really nice

> Which we already had and has nothing to do with language models masquerading as "AI".

Er, the language models are wiping the floor with more traditional semantic machine-learning based translation. It's kind of sad, but also cool that it works so well.

Oh. I guess you're making a distinction between LLMs and the not-so-large language models used for translation. I don't see much of a difference. They're transformer-based, they're intentionally stupid but work because they're fed huge amounts of data, etc.

>> is fine and dandy but ultimately pointless unless you have an alternative that doesn't require the entire world to cooperate in turning back the clock.

> An alternative to what? Tab renaming? Bad article summaries? Weak search engine algorithms?

The article summaries may be bad, but they are very popular and widely used. The search uses are not bad or weak; the only criticism I see is that they don't need to be kicked off directly from the browser. But fortunes are made on eliminating a single click.

I'm not claiming that having AI in the browser is super awesome. I personally haven't seen any killer use cases yet. I'm only saying that keeping AI 100% out of the browser is going to be a losing proposition. Not because it's so amazingly useful to have it, but because it's at least a little more useful and that's enough to choose one browser over another.

People are demonstrating that they're going to use AI whether you think it makes sense or not.



