
If you give the LLM more context (i.e. the player's past actions) and some directions such as "don't make any suggestions", I think you can get good results. They may not always be perfect, though.

For the types of responses needed, long-term context shouldn't be required.
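
Something like the sketch below is what I have in mind; the PlayerAction shape and the buildNarratorPrompt helper are made-up names for illustration, not from any particular engine:

    // Hypothetical sketch: assemble a narrator prompt from recent
    // player actions plus a "no suggestions" rule.
    interface PlayerAction {
      turn: number;
      text: string; // e.g. "opened the rusted gate"
    }

    function buildNarratorPrompt(history: PlayerAction[], maxActions = 10): string {
      // Keep only the most recent actions so the context stays small
      // enough for an on-device model.
      const recent = history.slice(-maxActions);
      const log = recent
        .map(a => `Turn ${a.turn}: the player ${a.text}.`)
        .join("\n");

      return [
        "You are the narrator of a text adventure.",
        "Describe only the outcome of the player's latest action.",
        "Don't make any suggestions about what the player should do next.",
        "Recent actions:",
        log,
      ].join("\n");
    }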

Anyway, I noticed that on a standard laptop, getting a response from the Chrome Prompt API running Gemini Nano takes a rather long time.
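
Roughly this kind of timing sketch is what I mean; note the Prompt API surface has shifted between Chrome versions, so the ai.languageModel.create / session.prompt names here are an assumption about the build you're on:

    // Assumed shape of the origin-trial Prompt API; adjust to whatever
    // your Chrome build actually exposes.
    declare const ai: {
      languageModel: {
        create(options?: { systemPrompt?: string }): Promise<{
          prompt(input: string): Promise<string>;
        }>;
      };
    };

    async function timeNanoResponse(systemPrompt: string, userTurn: string): Promise<void> {
      const session = await ai.languageModel.create({ systemPrompt });

      // Measure wall-clock latency of a single on-device completion.
      const start = performance.now();
      const reply = await session.prompt(userTurn);
      const elapsed = performance.now() - start;

      console.log(`Gemini Nano replied in ${Math.round(elapsed)} ms:\n${reply}`);
    }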



I guess I'm skeptical you can steer current LLMs in this way, or at least, this hasn't been my experience.

I also worry not only about the loss of a shared experience among players, but also about the authorial voice... even when prompted otherwise, LLMs tend to have an instantly recognizable style that I find unappealing.

Yes, all of this -- voice, tone, narrative direction -- could in theory be fixed with the right prompts, but I remain skeptical. From theory to practice seems such a long way...



