It's hard to know exactly what "tuned it" means, but they're using an AI model to detect harmful content. So it's very possible the model was trained on this biased data all along, and making it more aggressive simply exposed more of its biases.
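If it helps to picture the mechanism, here's a minimal toy sketch (purely illustrative, made-up numbers, no relation to their actual system) of how lowering a classifier's flagging threshold can surface a skew that was already baked into the scores:

    # Toy "toxicity classifier" whose scores are hypothetically skewed
    # against one group. Lowering the flagging threshold ("tuning it to
    # be more aggressive") doesn't create the bias -- it exposes bias
    # that was in the scores all along.
    import random

    random.seed(0)

    def toy_score(group: str) -> float:
        # Assumed bias: benign posts from group B score ~0.15 higher on
        # average, mimicking skew learned from biased training data.
        base = random.gauss(0.3, 0.1)
        return base + (0.15 if group == "B" else 0.0)

    posts = [("A", toy_score("A")) for _ in range(10_000)]
    posts += [("B", toy_score("B")) for _ in range(10_000)]

    def flag_rates(threshold: float) -> dict:
        flagged = {"A": 0, "B": 0}
        for group, score in posts:
            if score >= threshold:
                flagged[group] += 1
        return {g: flagged[g] / 10_000 for g in flagged}

    # Conservative threshold: almost nothing is flagged, skew is invisible.
    print("threshold 0.7:", flag_rates(0.7))
    # Aggressive threshold: the same underlying skew now flags far more of
    # group B's benign posts than group A's.
    print("threshold 0.5:", flag_rates(0.5))

The threshold change is what makes the skew visible, not what creates it.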
They're defining harmful content based on political orthodoxy, like every other censorious tinpot dictatorship in history. The objective remains the same too: promote a thought monoculture and propagate the political orthodoxy. The original article makes it very clear that this has nothing to do with preventing harm or promoting equality or any of the other nonsense it's being gift-wrapped in.
That's a bit too conspiratorial. The much more mundane reality is that they define hateful content based on whatever is most likely to get breathless, angry, censorious screeds written about it in the media, claiming their company is perpetuating (some injustice), if not literally killing people, by putting the wrong words on the screen.
Culture war outrage drives clicks; it's that simple.
That's fine so long as you identify the tyrants correctly. The people trying to avoid getting screamed at aren't the ones you should be accusing.