The medium is the message here; the MacBook is just bait.
A pure LLM is not effective on tabular data (hence the many transcripts of ChatGPT apologizing that it got a calculation wrong). For it to work as well as it seems to, they must be loading results into something like a pandas DataFrame and having the agent write and run programs against that DataFrame, tapping into stats and charting libraries, etc.
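To illustrate the point, here's a minimal sketch of the kind of program an agent might generate and run against tabular data instead of doing the arithmetic "in its head" (the column names and values are made up for the example):

```python
import pandas as pd

# Toy stand-in for query results that would be loaded into a DataFrame.
df = pd.DataFrame({
    "region": ["us", "us", "eu", "eu"],
    "revenue": [120.0, 80.0, 95.0, 105.0],
})

# The aggregation is done by real code, so the numbers are exact --
# no risk of the model hallucinating a sum.
summary = df.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)
```

The model's job is reduced to writing this code and narrating the printed result, which is a much better fit for its strengths.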
I’d trust it more if they showed more of the steps.
We’re using the new OpenAI Assistants API with the code interpreter feature, which lets you ask questions of the model; OpenAI turns those into Python code, runs it on their infrastructure, and pipes the output back into the chat.
It’s really impressive and removes the need to ask the model for code and then run it locally yourself. This is what powers many of the data-analysis product features appearing recently (we’re building one ourselves for our incident data, and it works pretty well!).
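The basic flow looks roughly like this: create an assistant with the `code_interpreter` tool enabled, start a thread with the user's question, and kick off a run. A hedged sketch using the `openai` Python package (the model name and prompt are assumptions, and the API call itself only runs if a key is configured):

```python
import os

# Assistant configuration: the code_interpreter tool is what lets the
# model write and execute Python on OpenAI's infrastructure.
assistant_spec = {
    "model": "gpt-4-turbo",  # assumed model choice for the example
    "instructions": "You are a data analyst. Answer with computed results.",
    "tools": [{"type": "code_interpreter"}],
}

# Guarded so the sketch is runnable without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    assistant = client.beta.assistants.create(**assistant_spec)
    thread = client.beta.threads.create(messages=[
        {"role": "user", "content": "What is the mean of column 'latency_ms'?"}
    ])
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id,
    )
    # Poll run.status and then read the thread messages for the answer.
```

You'd typically also attach an uploaded file (e.g. a CSV) to the thread so the interpreter has data to work on; that part is omitted here for brevity.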