
>Of course the model may hallucinate, but in this case it takes a few clicks in the dev tools to verify that this is not the case.

I don't know - or care to figure out - how OpenAI does their tool calling in this specific case. But moving tool calls to the end user is _monumentally_ stupid, for the latency if nothing else. If you centralize your function calls in a single model next to a fat pipe, you halve the latency of each call. I've never built, or seen, a function calling agent that moves the API function calls to client-side JS.
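The server-side pattern described above - the agent host executing tool calls itself and feeding results straight back to the model - can be sketched roughly like this (the registry, message shapes, and `get_weather` tool are illustrative, not any specific vendor's API):

```python
import json

# Hypothetical tool registry living on the server, next to the model.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def run_tool_call(call):
    """Execute a model-emitted tool call server-side and return a JSON result."""
    fn = TOOLS[call["name"]]
    args = json.loads(call["arguments"])
    return json.dumps(fn(**args))

# A model turn that requested a tool (shape is illustrative):
call = {"name": "get_weather", "arguments": '{"city": "Oslo"}'}
result = run_tool_call(call)
# The result string is appended to the conversation and sent back to the
# model in the next request - no round trip through the end user's browser.
print(result)
```

The latency argument is exactly this loop: each tool round trip stays within the data center instead of going model → client → API → client → model.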



It's not client side; the messages are visible in the API, though.

But what do you mean you don't care? The thing you were responding to was literally a claim that it was a tool call rather than direct output.


None of this really matters. It could be either case.

The thing we need to worry about is whether a Chinese company will drop an open source equivalent.


You should check out Claude Desktop or Roo-Code or any of the other MCP-capable client hosts. The whole idea of MCP is providing a universal, pluggable tool API to the generative model.
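The core of that pluggable idea can be sketched in a few lines: the host exposes a tool registry to the model through two operations, discovery and invocation. The method names `tools/list` and `tools/call` come from the real protocol, but this toy dispatcher and its `add` tool are illustrative, not an actual MCP implementation:

```python
# Toy tool registry; any tool can be plugged in without changing the dispatcher.
TOOLS = {
    "add": {"description": "Add two integers", "fn": lambda a, b: a + b},
}

def handle(request):
    """Dispatch an MCP-style request to the registry."""
    if request["method"] == "tools/list":
        # Discovery: the model learns what tools exist and what they do.
        return [{"name": n, "description": t["description"]}
                for n, t in TOOLS.items()]
    if request["method"] == "tools/call":
        # Invocation: the host runs the named tool with the given arguments.
        tool = TOOLS[request["params"]["name"]]
        return tool["fn"](**request["params"]["arguments"])
    raise ValueError("unknown method")

print(handle({"method": "tools/list"}))
print(handle({"method": "tools/call",
              "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}))
```

Because the model only ever sees the list/call interface, new tools are additions to the registry rather than changes to the host - which is the "universal" part.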



