I do NOT want search to become any fuzzier than it already is.
See the great decline of Google's search results, which often don't even contain all the words you asked for and likely omit the one that's most important, for a great example.
> I do NOT want search to become any fuzzier than it already is.
For a specialized shop site you may want it. Search term: "something 150", where the client is looking for a 1.5m something; if you're doing an exact text search, your engine will return a lot of noise. Or you'll have to fiddle with synonyms, dictionaries, and how you index your products, with a high chance of breaking other types of search queries.
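(As a hedged sketch of what that fiddling can look like: expanding a bare number in a query into unit-normalized variants, so an exact-match engine can still find a "1.5 m" listing. The function name and the cm/mm guesses are illustrative, not from any real search engine.)

```python
import re

def expand_measurements(query: str) -> set[str]:
    """Illustrative only: read a bare number like "150" as cm or mm
    and add unit-normalized variants of the query."""
    variants = {query}
    for match in re.finditer(r"\b(\d+)\b", query):
        value = int(match.group(1))
        for factor in (0.01, 0.001):  # guess: centimeters or millimeters
            variants.add(query.replace(match.group(0), f"{value * factor:g} m"))
    return variants
```

So "pipe 150" would also be searched as "pipe 1.5 m" and "pipe 0.15 m"; which guesses are sensible depends entirely on the shop's vertical, which is the breakage risk mentioned above.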
How many sites will have useful results to return for a "something 150"? Muzzle width? Bees? T-shirt size? Walking distance? You surely cannot want _all_ these categories, yet you'll get them all in a list. I might be biased, but today's fuzzy search is a dumpster fire: sites hate returning only two results, so they bury anything relevant in a tidal wave of unrelated garbage. I have office mates like that, and everybody hates them as well.
My current case is: whatever you'd look for in a hardware store. So anything, yeah: muzzle width, wood length, protective gear, liquid quantities, animal food, etc.
And depending on the client vertical they tend to not use the same vocabulary when looking for products.
But contrary to some other comments, I know LLMs are not magical tools, and anything we use will require data to fine-tune whatever base model we choose. And it will be used on top of standard text search, not as a full replacement. I'm sure many companies are currently doing the exact same thing, or will be soon enough.
But this is why LLMs are so amazing. They understand context and nuance, and they have reasoning skills now. So you will not get a long list of garbage from a good model.
I want both fuzzy search and exact search. Google still has the "I'm feeling lucky" button, so it can support multiple search buttons. It could default to fuzzy search and have an "I'm feeling unlucky" button for exact search.
I don't necessarily want search to become any fuzzier than it already is either, but what's happened has happened and I've already responded to the decline of traditional search engines. Nowadays I pretty much only search DuckDuckGo with site:(something), or else I ask Perplexity the question and ask for some links. Traditional search engines now just give a thousand SEOed-to-death articles, probably generated by AI, from hundreds of pointless third-party websites that all rehash the same basic content.
It might be worth bifurcating soon: search indexes and AI engines, doing different roles. The index would still have to be sorted with AI, though, to focus on original and first-party material and to downrank ad-driven slop.
Anything people ask a human to do instead of a computer.
Humans are not the most reliable. If you're OK giving the task to a human, then you're OK with a lower level of reliability than a traditional computer program gives.
Simple example: Notify me when a web page meaningfully changes and specify what the change is in big picture terms.
We have programs to do the first part: Detecting visual changes. But filtering out only meaningful changes and providing a verbal description? Takes a ton of expertise.
With MCP, I expect that by the end of this year a non-programmer will be able to have an LLM do it using just plugins in existing software.
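A rough sketch of the mechanical half of that task, detecting which visible lines changed between two page snapshots (the crude tag-stripping and helper names are my own; the "is this change meaningful, and how do I describe it" step is exactly the part that would be handed to an LLM):

```python
import difflib
import re

def visible_text(html: str) -> list[str]:
    """Very crude extraction: strip tags, keep non-blank lines."""
    text = re.sub(r"<[^>]+>", " ", html)
    return [line for line in (l.strip() for l in text.splitlines()) if line]

def changed_lines(old_html: str, new_html: str) -> list[str]:
    """Lines added (+) or removed (-) between two snapshots of a page.
    An LLM plugin would then judge whether the diff is meaningful
    and produce the big-picture description."""
    diff = difflib.unified_diff(visible_text(old_html),
                                visible_text(new_html), lineterm="")
    return [d for d in diff
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
```

The detection part is trivial, as noted above; the filtering and verbal summary are where the expertise (or the model) comes in.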
With suitable safeguards or limits on what it can spend why not? On the one hand it might not fear repercussions as a human would, on the other hand it’s far less likely to embezzle funds to support its overly lavish lifestyle or gambling addiction.
Yeah, you could marry an AI and share a bank account with it, and now it could buy you useful stuff it thinks you need without you doing anything, or even buy you presents.
I don't know about you, but even as a senior engineer, my employer hasn't given me the ability to spend money :-) It's not something employers normally do.
And as was pointed out, if you use something like MCP, you can control what it spends on. You can limit the amount, and limit to a whitelist. It may still occasionally buy the wrong thing, but the wrong thing will be something you preapproved.
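(A toy sketch of that kind of guard, my own example rather than an actual MCP API: a tool layer could enforce the whitelist and a running budget before any purchase call reaches the outside world.)

```python
# Hypothetical pre-purchase guard: the model can only buy
# preapproved items, and only within a fixed budget.
class SpendGuard:
    def __init__(self, budget: float, whitelist: set[str]):
        self.remaining = budget
        self.whitelist = whitelist

    def approve(self, item: str, price: float) -> bool:
        """Allow the purchase only if the item is whitelisted
        and the remaining budget covers it."""
        if item not in self.whitelist or price > self.remaining:
            return False
        self.remaining -= price
        return True
```

Under a scheme like this, the worst-case failure mode is buying a preapproved item you didn't need, never an unbounded spend.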
To elaborate — the task definition itself is vague enough that any evaluation will necessarily be vibes-based. There is fundamentally no precise definition of correctness/reliability.
I'm afraid that CSS is so broken that even AI won't help you generalize centering content. OTOH, in the same spirit, you are now a proficient iOS/Android developer where it's just "center content - BOOM!".
Why do you think this is only a meme? Flow modes, centering methods and content are still at odds with each other and don't generalize. This idiotic model cannot get it right unless you're designing for a very specific case that will shatter as soon as you bump its shoulder.
Edit: I was in the AI CSS BS loop just a few days ago; not sure how you guys miss it. I start screaming f-'s and "are you an idiot" when it cycles through "doesn't work", "ignored prereqs", and "doesn't make sense at all".
What if I have text nodes in the mix? And I don't know that in advance, e.g. I'm doing <div>{content}</>? What if this div is in a same-class flexbox and now its margins or --vars clash with the defaults of its parent, which it knows nothing about by the principle of isolation? Then you may suggest using wrapper boxes for all children, but how e.g. align-baseline crosses that border is now a mystery that depends on a bunch of other properties at each side.
Your reply is correct, but it's exactly that "just do this specific configuration" sort of correct, which punctures component isolation all the way through and makes these layers leak into each other, creating a non-refactorable mess.
looking at most websites, regardless of how much money and human energy has been spent on them:
yes I think we're okay with divs not being centered some of the time.
many millions have been spent to adjust pixels (while failing to handle loads of more common issues), but most humans just care whether they can eventually get what they want to happen if they press the button harder next time.
(I am not an LLM-optimist, but visual layout is absolutely somewhere that people aren't all that picky about edge cases, because the success rate is abysmally low already. it's like good translations: it can definitely help, and definitely be worth the money, but it is definitely not a hard requirement - as evidence I point to the vast majority of translated software.)
Humans can extract information quicker from proper layouts. A good layout brings faster clarity in your head. What developers often get wrong: it's not just about doing something, it's also about how simple and fast it is to parse and understand (from a visual point of view; of course information architecture and UX matter a lot as well). Not aligning things is a slippery slope. If you can't center a div, probably all the other, more complex things in your website/app are going to be off or even broken. Thankfully AIs can center divs by now, but proper understanding of grid systems is at best frontier.
I could imagine a vision-enabled transformer model being useful to create a customizable “reading mode”, that adjusts page layout based on things like user prefs, monitor/window size, ad recognition, visual detail of images, information density of the text, etc.
Maybe in an alternate universe where every user-agent-enabled browser had this type of thing enabled by default, most companies would skip site design altogether and just publish raw ad copy, info, and images.
Neither. They're describing the philosophical similarities of:
* "Has only been that way so far because that's how computers are" and
* "I just want to center the damn content. I don't much care about the intricacies of auto-margin, flexbox, CSS grid, align-content, etc."
Centering a div is seen as difficult because of complexities that boil down to "that's just how computers are", and they find (imo rightful) frustration in that.
This sounds like a front-end dev that understands the intricacies of all of this when, again, this person is saying "I just want the content centered".
> again, this person is saying "I just want the content centered".
You can't just want. It always backfires. It's called being ignorant. There are always consequences. I just want to cross the road without caring too. Oh the cars might just hit me. Doesn't matter?
> This sounds like a front-end dev that understands the intricacies of all of this
That's the person that's supposed to do this job? Sounds bog standard. What's the problem?
At some point, sure; but there is always value in comprehending why someone might find an existing flow overly obtuse and/or frustrating when they "just want to do a simple thing".
To imagine otherwise reminds me of The Infamous Dropbox Comment.
Addendum: to wit, whole companies, like SquareSpace and Wix, exist because web dev is a pain and WYSIWYG editors help a lot
> Addendum: to wit, whole companies, like SquareSpace and Wix, exist because web dev is a pain and WYSIWYG editors help a lot
But these companies DO care (or at least that's the point) and don't "just want to do a simple thing".
The point of outsourcing is to hand it to a professional with expertise, like seeing a doctor. Dropbox isn't "just a simple thing" either, so no, it's not the same.
The human or "natural" interface to the outside world: interpreting sensor data, user interfacing (esp. natural language), art and media (e.g. media file compression), even predictions of how complex systems will behave.
I unironically use an LLM for tax advice. It has to be directionally workable, and 90% is usually good enough. Beats Reddit and the first page of Google, which was the prior method.
That is search. Like with Google, you need to verify the accuracy of what you get told. An LLM that answers and then quotes only government docs would be best, so you can quickly check. Any conclusions the LLM itself draws about tax are suspect.
I think you missed my point. A 100% accurate LLM would also be helpful, but that's a different use case. Sometimes the tax guidance is incomplete or debatable. Sometimes reasonable, plausible, or acceptable is the target.
Shopping assistant for subjective purchases. I use LLMs to decide on gifts, for example. You input the person's interests, demographics, hobbies, etc. and interactively get a list of ideas.
I think the only thing where you could argue it's preferred is creative tasks like fiction writing, wordsmithing, and image generation where realism is not the goal.
I used Copilot to play a game "guess the country" where I hand it a list of names, and ask it to guess their country of origin.
Then I handed it the employee directory.
Then I searched by country to find native speakers of languages who can review our GUI translation.
Some people said they don't speak that language (e.g. they moved country when they were young, or the AI guessed wrong). Perhaps that was a little awkward, but people didn't usually mind being asked, and overall they have been very helpful in this translation-reviewing project.
I see the ".fr" in your profile; but, in the United States, that activity would almost certainly be a conversation with HR.
If you really, really wanted help with a translation project and you didn't want to pay professional translators (which you should, since translation-by-meaning requires fluency or beyond in both languages), then there are more polite ways of asking for this information than cold-calling every person with a "regional"-sounding name and saying "hey, you know [presumed mother tongue]?"
There's nothing racist about what he said. It's not racist, or even particularly impolite, to nicely ask someone "hey, I noticed you have x name, are you from $country by any chance?"