Last week we had a customer request land in our support queue about a feature that I partially wrote, and that I wrote a pile of public documentation for. The support engineer ran the customer's query through Claude (trained on our public and internal docs) and it very, very confidently made a bunch of stuff up in the response. It was quite plausible-sounding and it would have been great if it worked that way, but it didn't. While I was explaining why it was wrong in a Slack thread with the support engineer and another engineer who also worked on that feature, he ran Augment (which has the full source code of the feature), and it promptly and just as confidently made up more stuff (but different stuff!). Some choice bleeding-eye emojis were exchanged. I'm going to continue to use my own intelligence, thank you.
Popular HN threads about anything AI-related always attract stories highlighting AI failures. It's such a common pattern that I want to analyze it and get numbers. (Which might require AI...)
Popular HN threads about anything AI-related always attract stories highlighting AI successes. It's such a common pattern that I want to analyze it and get numbers. (Which might require using my brain...)
It's relevant because it shows that models haven't improved as much as the companies delivering them would like you to believe, no matter what mode (or code) they work under. Developers are quickly transforming from code writers into code readers, and the good ones feel demoralized knowing that they could do it better themselves but are instead forced to read gibberish produced by a machine at a rate of dozens of lines per second. Worse, when they review that gibberish and it doesn't make sense, even if they lay out their arguments, that same gibberish-producing machine can, in a matter of seconds, write counter-arguments that look convincing but lack any substance for anyone who actually understands and tries to read them.
Edit: I'm saying this as a developer who uses LLMs for coding, so I feel I can criticize them constructively. Also, sometimes the code actually works when I put enough effort into describing what I expect; I suppose I could just write the code myself, but the problem is that I don't know in advance which approach will result in a quicker delivery.
Relying on the model’s own “memory” to answer factual queries is almost always a mistake. Fine-tuning is almost always a more complex, more expensive and less effective method to give a model access to a knowledge base.
However, using the model as a multi-hop search robot, leveraging its general background knowledge to guide the research flow and interpret findings, works exceedingly well.
Training with RL to optimize research tool use and reasoning is the way forward, at least until we have proper stateful LLMs that can effectively manage an internal memory (as in Neural Turing Machines and the like).
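To make the "multi-hop search robot" idea concrete, here is a minimal sketch of that loop. It's an illustration under assumptions, not anyone's production setup: call_llm and search_docs are hypothetical stand-ins for whatever model API and document/code index you actually have.

    # Minimal sketch of using the model as a multi-hop search robot:
    # it proposes queries, a real index returns snippets, and the final
    # answer must come from the gathered snippets, not model "memory".
    # call_llm and search_docs are placeholders, not a real library API.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call here

    def search_docs(query: str, k: int = 3) -> list[str]:
        raise NotImplementedError  # plug in your doc/code search here

    def multi_hop_answer(question: str, max_hops: int = 3) -> str:
        evidence: list[str] = []
        for _ in range(max_hops):
            # The model's background knowledge steers the search...
            query = call_llm(
                "Question: " + question + "\n"
                "Evidence so far:\n" + "\n".join(evidence) + "\n"
                "Reply with ONE search query that would help, or DONE."
            ).strip()
            if query == "DONE":
                break
            # ...but the facts themselves come from retrieval, not recall.
            evidence.extend(search_docs(query))
        return call_llm(
            "Answer using ONLY the evidence below; say so if it is insufficient.\n"
            "Question: " + question + "\n"
            "Evidence:\n" + "\n".join(evidence)
        )

The point is the division of labour: the model plans and interprets, the index supplies the facts.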
"trained on our public and internal docs" trained how? Did you mean fine-tuned haiku? Did you actually fine tune correctly? Its not even a recommended architecture.
Or did you just misuse basic terminology about LLMs and are now saying it misbehaved, likely because your org did something very wrong with it?
>> The support engineer ran the customer's query through Claude (trained on our public and internal docs) and it very, very confidently made a bunch of stuff up in the response.
> Yeah, LLMs are not really good about things that can't be done.
From the GP's description, this situation was not a case of "things that can't be done"; it was a statistically generated document producing what should be the expected result:
    It was quite plausible-sounding and it would have been great if it worked that way, but it didn't.
The core issue is likely not with the LLM itself. Given sufficient context, clear instructions, and purposeful agents, a DAG of them with good grounding context will not produce such consistently wrong results (rough sketch below).
There are a lot of devils in the details here, and there are few details in the story.
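For what it's worth, the "grounding context" point above is roughly this pattern. A sketch under assumptions: call_llm is the same kind of placeholder as in the earlier snippet, and the two-node draft-then-verify graph is my illustration, not a description of any particular product.

    # Toy two-node DAG: draft strictly from supplied docs, then verify the
    # draft against the same docs. call_llm is a placeholder model call.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError

    def draft_answer(question: str, docs: str) -> str:
        # Node 1: answer only from the provided docs, never from memory.
        return call_llm(
            "Using ONLY the docs below, answer the question. "
            "If the docs don't cover it, reply UNSUPPORTED.\n"
            "Docs:\n" + docs + "\n\nQuestion: " + question
        )

    def verify_answer(answer: str, docs: str) -> str:
        # Node 2: flag any claim in the draft that the docs don't support.
        return call_llm(
            "List every claim in the answer that the docs below do not "
            "directly support, or reply OK if all of it is grounded.\n"
            "Docs:\n" + docs + "\n\nAnswer: " + answer
        )

    def grounded_reply(question: str, docs: str) -> str:
        answer = draft_answer(question, docs)
        if answer.strip() == "UNSUPPORTED":
            return "The docs don't cover this."
        report = verify_answer(answer, docs)
        return answer if report.strip() == "OK" else "Needs review: " + report

Whether that actually catches the confident fabrications from the original story is an open question, but it's the kind of wiring the comment is gesturing at.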
They are trained on 100% true facts and successful paths.
We humans grew our analysis/reasoning skills from the 99.9999% of our attempts that failed: unsuccessful trials and errors, wasted time, and frustration.
So we know that behind a truth there's a bigger world of fantasy.
For an LLM, everything is just fantasy. Everything is as true as its opposite. Building intelligence will take a lot more than the truth; it will require controlled malice and deception.