I've never really bought this argument. There might be unethical growth hackers in the company, but A/B tests are not the problem in that situation.
In my experience, they mostly just catch bugs. Stuff like "hmm, our much better looking signup flow underperforms... oh, the form is broken on Safari." That kind of thing.
To make it easier for everyone to understand the other poster's point...
a double-blind test can help you determine the most effective way to cause pain to a monkey, but it will never answer the question of whether you should be doing so.
Think of an AI optimizing for a stated goal while ignoring implied constraints (e.g. eradicating humanity to stop wars).
This can be the case with A/B testing. Sure, you can increase ad clicks by 30%... if you trick the user into clicking the ad with a carefully timed layout jump.
I think GP's argument may go in this direction. I wouldn't say A/B testing is the problem itself (it's a tool, after all), but I can imagine it's sometimes not used very well.
Another point: Spotify's core flow changes so much (feels like almost daily) that I've lost all confidence in using it.