Personally I'll wait for OpenAI to add this feature directly. I'm sure they're working on it.
I don't want this solution delivered in the form of an extension (one practical reason is I use ChatGPT from mobile a lot of the time). I have 0 extensions installed in general.
I regularly use the data export button on ChatGPT, then parse the JSON into an HTML file per conversation, plus a database to search for the file(s) when needed.
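A minimal sketch of that kind of pipeline, assuming the `conversations.json` layout found in a recent ChatGPT data export (a list of conversations, each with a `title` and a `mapping` of message nodes); the field names are an assumption and may differ between export versions:

```python
import html
import json
import sqlite3
from pathlib import Path

def extract_messages(conv):
    """Pull (role, text) pairs out of a conversation's 'mapping' graph,
    ordered by create_time. Non-text parts (e.g. images) are skipped."""
    msgs = []
    for node in conv.get("mapping", {}).values():
        m = node.get("message")
        if not m:
            continue
        parts = (m.get("content") or {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            msgs.append((m["author"]["role"], m.get("create_time") or 0, text))
    msgs.sort(key=lambda t: t[1])
    return [(role, text) for role, _, text in msgs]

def conversation_to_html(conv):
    """Render one conversation as a standalone HTML page."""
    title = html.escape(conv.get("title") or "Untitled")
    body = "".join(
        f"<div class='{role}'><b>{role}</b><p>{html.escape(text)}</p></div>"
        for role, text in extract_messages(conv)
    )
    return f"<html><head><title>{title}</title></head><body><h1>{title}</h1>{body}</body></html>"

def build_index(conversations, db_path=":memory:"):
    """Full-text index (SQLite FTS5) mapping search terms back to files."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chats USING fts5(title, body, file)")
    for i, conv in enumerate(conversations):
        body = " ".join(text for _, text in extract_messages(conv))
        db.execute("INSERT INTO chats VALUES (?, ?, ?)",
                   (conv.get("title") or "Untitled", body, f"chat_{i}.html"))
    db.commit()
    return db

if __name__ == "__main__" and Path("conversations.json").exists():
    conversations = json.loads(Path("conversations.json").read_text())
    out = Path("chats")
    out.mkdir(exist_ok=True)
    for i, conv in enumerate(conversations):
        (out / f"chat_{i}.html").write_text(conversation_to_html(conv))
    build_index(conversations, str(out / "index.db"))
```

Finding a chat later is then a single FTS query, e.g. `SELECT file, title FROM chats WHERE chats MATCH 'pirate'`.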
I learned today that o1 is able to search through all chats and can find and verify whether the findings are relevant to the actual context. I found that very useful, as I have a lot of very long chats regarding a single project.
ChatGPT lists the findings with date and context, and searches further back if asked to (in my case, back to summer 2024).
I feel the same. You might want to check out https://martiansoftware.com/chatkeeper for a non-extension option. It's a CLI that syncs a ChatGPT export with local markdown files. (full disclosure: it's my project)
I’ve always wanted better search and chat organization.
But I’m at a place where I can’t determine if the ephemeral UX of chatting with AI (ChatGPT, Claude) isn’t actually better. Most chats I want to save these days are things like code snippets that I’m not ready to integrate yet.
You could join my native, cross-platform client waitlist[1] if you're looking to use your OpenAI API key. Work-in-progress but it's coming along pretty fast.
You might want to check out <https://martiansoftware.com/chatkeeper>. It's a CLI that syncs a ChatGPT export with local markdown files. I use it to keep my conversation history in Obsidian, where I can search through my conversations and link them to my other notes. (full disclosure: it's my project)
That is a perfect use case for having an extension like this. It makes it easier for you to jump back into a previous conversation, and that is primarily what I use it for as well.
The fact that you even need something like this shows how far we are from truly useful language models. Ideally they would hold the context of all your messages in mind; so far, we've had to manage that context for them manually.
To be fair, this is less a language-model problem and more a problem in the application layer around the models.
Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
I think the long term AI chat is just relatively new as a UI pattern, and so it takes time to build patterns around it.
Ex: in 2023 I told GPT to answer all questions like a pirate. I never told it to stop doing that, so if we're loading every historical chat in memory, should it still be answering as a pirate?
> Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
Nope, with an infinite context window the LLM would take forever to give you an answer. Therefore it would be useless.
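A back-of-envelope sketch of why: self-attention cost grows quadratically with context length, so "infinite context" means unbounded latency. The constants below are illustrative assumptions, not any real model's dimensions:

```python
def attention_flops(n_tokens: int, d_model: int, n_layers: int) -> int:
    """Rough FLOPs for attention's two n x n matrix products per layer
    (scores Q@K^T, then weighting of V); ignores projections and the MLP."""
    return n_layers * 2 * (2 * n_tokens * n_tokens * d_model)

# Doubling the conversation length roughly quadruples the attention work:
base = attention_flops(8_000, 4096, 32)
doubled = attention_flops(16_000, 4096, 32)
assert doubled == 4 * base
```

So even before memory limits kick in, shoving an ever-growing history into context makes each reply strictly slower, which is why retrieval over past messages is the more plausible design.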
We don't really have such a thing as a context window, it's an artifact of LLM architecture. We are building a ton of technology around it but who's to say it's the right approach?
Maybe the best AIs will only use a very tiny LLM for actual language processing while delegating storage and compression of memories to something that's actually built for that.
You need something like this if you want to use them as a reminder. Even if LLMs could remind you of past chats, they wouldn’t know which chats you want to be reminded of. It’s like marking chats as favorites. You actually have to mark them yourself, for anyone to know which chats are your favorites.
I made a tool called Slate AI[0] that you can run locally. It organizes your chat threads and image generations into tabs that you can save on your local disk (as JSON files in OpenAI API format) and reload to continue later. It exports markdown, too.
Another feature that I find shockingly absent from most web-based chat providers is autocomplete, i.e. Copilot-like suggestions to complete what you're typing. Typing long text into chat boxes quickly becomes tedious, and context-based autocomplete helps a lot (you can experience this in AI IDEs like Zed or Cursor); in fact I often resort to using those just for this feature.
There are two technologies I use every day that demonstrate a company is capable of solving an incredibly hard problem, X, while completely dropping the ball on the presumably easier part of UX, Y. ChatGPT is one of those. Driving my Tesla is the other. I'm not sure how or why it happens, but I think about it daily.
Ineffective dogfooding. PMs might use it every day, but they only use a subset of functionality. Some engineers may intentionally never use it when they get home because they're so sick of looking at it. Some engineers do crazy esoteric things, but it doesn't propagate because their heads are down within the org. Most people showcase exclusively happy paths to leadership, sorry, I meant management. Executives only use it for emails, demos, and, again, a limited subset of happy paths.
Just burnout, siloing, and a lack of creativity. We can't solve these problems in the industry because we are greedy short-term thinkers who believe we're long-term innovators. To say nothing of believing we are smarter and more entitled than we are.
Of course. But I use it dozens of times a day across dozens of projects. Many of the concepts are linked together. Intelligently indexing, linking, and referencing them seems like a pretty obvious feature. I doubt I'm in the minority in expecting this.
"Harder [for the organization in question] to solve" is definitely right
Not really an excuse though, since a product company's mandate is to create a product that doesn't leave its customers baffled about apparently missing functionality.
Honestly, I'm pretty sick of ChatGPT these days. It completely ignores custom instructions, loses context insanely quickly, has bugs when working with Canvas (it puts the code in the chat instead of updating the canvas), the Projects feature is half-baked and a terrible experience, and GPTs are just really stupid and also half-baked. I started using Le Chat (Mistral) again and, honestly, the conversations there are much more fun. Tons of issues with that one as well, but I am happier using it haha. I ended up using a desktop app that lets me control the system prompt against Mistral's API and couldn't be happier.
I’ve been wondering whether OpenAI’s updates ruin other applications or utility. Kinda makes me skeptical of global models. Obviously OpenAI is optimizing and changing how they train their models, so it makes sense to me that you’d lose some of the quirks. I think this is actually the promise of open-weight models. Personally, I’ve never used ChatGPT, Claude, whatever. I’ve found a decent amount of utility in just the open-weight models.
I am right there with you on the open weight models. Only reason I started back on ChatGPT is because I'm rebuilding my servers and had to take out my 3090Ti. That thing is huge and I needed the room for hard drives until my 3.5" to 5.25" adapters come in.
I built my own client (llmpad.com) originally to solve this problem, as well as to use other LLMs and features. A little surprised others have not done this too?
You can try llmpad soon, feel free to message me.
Just a reminder that LibreChat exists. It's FOSS, and lets you bring your own API keys for all the LLM providers. The UI is excellent.
You can run it locally, or as I do on a $5/month Linode server. I don't want to pay ~$20/month for each LLM provider, so I put $5 to $10 on my Anthropic and OpenAI API accounts every couple months, and that lasts me plenty long.
You get to save all your chats, change models mid-chat, view code artifacts, create presets, and much more.
If you don't know how to set up something like this, ask ChatGPT or Claude. They will walk you through it, and you will learn a useful skill. It's shockingly easy.