If someone is taking the time to refute ChatGPT’s output and telling you why the answer isn’t applicable in a given situation, it certainly implies that ChatGPT wasn’t “correct enough” at all.
In what situations do you think it's fine to be "correct enough"?
But this person says they want to refute it in every situation.
Some people seem to make a hobby of refuting the output of others. So no, I don’t trust the implication that if somebody spends time refuting it that it must be worth refuting.
In my experience (with both people-output and ChatGPT-output), my goal is to not refute anything unless it absolutely, positively must be refuted. If it's a low-stakes situation where another person has an idea that seems like it will probably work, let them go nuts and give it a shot. I'll offer quick feedback or guiding questions, but I have zero interest in refuting the idea even if I think there's a chance it'll go wrong. They can learn by doing.