It's not just distorting discussions; it's leading people to put a lot of faith in what LLMs tell them. I was just on a Zoom call an hour ago where a guy working on a startup had asked ChatGPT about his idea and then emailed us the result for discussion in the meeting. ChatGPT basically told him what he wanted to hear - essentially that his idea was great and would be successful ("if you implement it correctly" was doing a lot of work). It was such a glowing endorsement that the guy came away convinced he had a million-dollar idea. I had to be "that guy" and suggest that maybe ChatGPT was just telling him what he wanted to hear based on the way the question was formulated. I tried to be very diplomatic about it - maybe too diplomatic, because it didn't shake his faith in what ChatGPT had told him.
LLMs directly exploit a human trust vuln. Our brains tend to engage with them relationally and form an unconscious, functional belief that the agent on the other end is responding with its real thoughts, even when we know better.
AI apps ought to, at minimum, warn us that their responses are not anyone's (or anything's) real thoughts. But the illusion is so powerful that many people would ignore the warning.