What are you working on specifically? I've been loosely following poker research since Libratus; the last paper I read was ReBeL. Has there been any meaningful progress after that?
I was thinking about developing a 5-max poker agent that plays decently (not superhumanly), but it still seems like mostly uncharted territory. There's Pluribus, but it's limited to fixed stack sizes, very complex, and very computationally demanding to train, and I think also during gameplay.
I don't see why an LLM can't learn to play a mixed strategy. An LLM outputs a distribution over all tokens, which is then randomly sampled from.
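A minimal sketch of that point (the logits are made up, nothing model-specific): sampling from a softmax over action tokens already realizes a mixed strategy, e.g. raising only part of the time in the same spot.

    import math, random

    # Hypothetical illustration (made-up logits, not a real model): a policy head
    # scores action tokens; sampling from the softmax realizes a mixed strategy.
    logits = {"fold": 0.1, "call": 1.2, "raise": 0.3}
    m = max(logits.values())
    exp = {a: math.exp(v - m) for a, v in logits.items()}
    total = sum(exp.values())
    probs = {a: e / total for a, e in exp.items()}

    action = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    print(probs, "->", action)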
Text-trained LLMs are likely not a good fit for optimal play: just as in chess, the position changes too much, there's too much exploration, and too much accuracy is needed.
CFR is still the best. However, as in chess, we need a network that can help evaluate the position. Unlike chess, the hard part isn't knowing a value; it's knowing what the current game position is. For that, we need something unique.
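For anyone who hasn't seen CFR up close, a minimal sketch of the regret-matching update it runs at each infoset (the action values are hypothetical; real CFR gets them by traversing the game tree, which is where a learned evaluator would come in):

    # Hedged sketch of the core CFR update: regret matching at one infoset.
    def regret_matching(cum_regret):
        pos = [max(r, 0.0) for r in cum_regret]
        s = sum(pos)
        return [p / s for p in pos] if s > 0 else [1.0 / len(pos)] * len(pos)

    actions = ["fold", "call", "raise"]
    cum_regret = [0.0, 0.0, 0.0]
    action_values = [0.0, 1.5, 2.0]               # hypothetical counterfactual values

    strategy = regret_matching(cum_regret)        # uniform on the first iteration
    node_value = sum(p * v for p, v in zip(strategy, action_values))
    cum_regret = [r + v - node_value for r, v in zip(cum_regret, action_values)]
    print(regret_matching(cum_regret))            # strategy for the next iteration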
I'm pretty convinced that this is solvable. I've been working on rs-poker for quite a while. Right now we have a full multi-handed arena implemented, and a multi-threaded counterfactual framework (no memory fragmentation, good cache coherency).
With BERT and some clever sequence encoding we can create a powerful agent. If anyone is interested, my email is: elliott.neil.clark@gmail.com
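To make the encoding idea concrete, here is a rough sketch (entirely hypothetical vocabulary and layout, not rs-poker's actual API) of flattening one hand history into a token sequence a BERT-style encoder could embed:

    # Hypothetical sketch: position, hole cards, board, and betting actions
    # flattened into tokens for a BERT-style encoder. Vocabulary is made up.
    hand_history = [
        "POS:BTN", "HOLE:As", "HOLE:Kd",
        "PRE:UTG:fold", "PRE:CO:raise_2.5bb", "PRE:BTN:call",
        "FLOP:7h", "FLOP:Kc", "FLOP:2s",
        "FLOP:CO:bet_half_pot", "FLOP:BTN:<decision>",
    ]
    vocab = {tok: i for i, tok in enumerate(sorted(set(hand_history)))}
    token_ids = [vocab[tok] for tok in hand_history]
    print(token_ids)  # these ids would feed the encoder; its output conditions the policy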
I haven't been working on game-related topics lately; I'm in industry now (algo trading) and also a little bit out of touch.
> Has there been any meaningful progress after that?
There are attempts [0] at making these algorithms work for exponentially large beliefs (i.e., ranges). In poker, beliefs are constant-sized (each player receives 2 cards at the start), which is not the case in most games: in many games you repeatedly draw cards from a deck, and the number of histories/infosets grows exponentially.
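A quick back-of-the-envelope illustration of that size difference (numbers only; the second game is a made-up example):

    import math

    # In hold'em a player's hidden state is just their 2 hole cards, so a range
    # never has more than C(52, 2) entries, no matter how long the hand runs.
    print(math.comb(52, 2))   # 1326, constant for the whole hand

    # In a hypothetical game where each player draws 2 fresh hidden cards per
    # round, the number of possible private histories multiplies every round.
    deck, histories = 52, 1
    for _ in range(4):
        histories *= math.comb(deck, 2)
        deck -= 2
    print(histories)          # ~1.9e12 after 4 rounds: exponential in the rounds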
But nothing works well for search yet, and it is still an open problem. For pure policy learning without search, RNaD [2] works okay from what I've heard, but it is finicky with hyperparameters to get it to converge.
Most of the research I've seen is concerned with making regret minimization more efficient, most notably predictive regret matching [1].
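Roughly, the twist over plain regret matching looks like this (a sketch of the idea, not the paper's exact formulation):

    # Hedged sketch of predictive regret matching [1]: the next strategy is
    # proportional to the positive part of cumulative regret plus a prediction
    # of the upcoming regret (here, simply the last observed regret).
    def predictive_strategy(cum_regret, predicted_regret):
        pos = [max(r + m, 0.0) for r, m in zip(cum_regret, predicted_regret)]
        s = sum(pos)
        return [p / s for p in pos] if s > 0 else [1.0 / len(pos)] * len(pos)

    cum_regret = [3.0, -1.0, 0.5]      # hypothetical cumulative regrets
    last_regret = [0.2, -0.3, 0.4]     # used as the prediction of the next regret
    print(predictive_strategy(cum_regret, last_regret))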
> I was thinking about developing a 5-max poker
Oh, sounds like a lot of fun!
> I don't see why an LLM can't learn to play a mixed strategy. An LLM outputs a distribution over all tokens, which is then randomly sampled from.
I tend to agree; I wrote more in another comment. It's just not something an off-the-shelf LLM would do reliably today without a lot of non-trivial modifications.